Mastering Programming with MATLAB (Coursera) [Week-1 to 4]

Table of Contents
Mastering Programming with MATLAB (Coursera) [Week-1 to 4]
RECURSION
Computer Memory
Conditional check
Non-recursive (iterative) factorial function
Recursive versus Iterative
Sierpiński triangle
Now we will plot this in an efficient way
Greatest Common Divisor (GCD)
Assignment
Problem 1: Digit Summation
Problem 2: Maximum Element
Problem 3: Reverse a Vector
Problem 4: Fibonacci Series
Problem 5: Palindrome
Variable Number of Arguments
Assignment
Problem 1: Name-value Pairs
Problem 2: Data Entry
Function Handles and Nested Functions
Anonymous function
Nested Function
Assignment
Problem 1: Autograder
Problem 2: Fun with polynomial
Mixed Mode Arithmetic
Script examples of binary arithmetic operations
Assignment
Problem 1: Edge Detection
Problem 2: Audio Mixer
Linear Equations
Simultaneous Linear Algebraic Equations
Inconsistent Equations
Overdetermined Problems
Underdetermined Problems
Ill-Conditioned vs Well-Conditioned Equations
over-determined system
Assignment
Problem: Electric circuit
Problem: Linear Regression
Live Scripts
Mauna Loa Carbon Dioxide Concentrations
Let's read the data in and plot it:
Smooth the data and plot it with the original data:
Model the growth of CO2 as exponential and compare to smoothed data:
Extend exponential model from end of monthly data to future:
Practice Quiz
Error Handling
Exception Handling
Matrix multiplication example
Assertion
Summary
Algorithmic Complexity
Fibonacci Series
reverse the elements of a vector
timeit function
Complexity notation
sequential search
Binary Search
traveling salesperson problem
matrix multiplication
Assignment
Problem 1: Recursion revisited
Problem 2: Fibonacci profiler
Efficiency in Practice
pre-allocation
Profiling
another example
Practice Quiz
Vectorization and Other Speed-Ups
Repmat
Modes of Passing Arguments
Index Re-ordering
parfor
Summary
Practice Quiz
Object Oriented Programming
Graphical User Interfaces

RECURSION

Recursion is a powerful idea that makes it possible to solve some problems easily that would otherwise be quite difficult. Recursion cannot be used in all programming languages, but it is supported by most modern languages, including MATLAB.
A definition of a concept that uses the concept itself is called a recursive definition, and the use of a concept in the definition of the concept is called recursion.
% function f = rfact(n)
%     if n == 0
%         f = 1;
%     else
%         f = n*rfact(n-1);
%     end
% end
rfact(3)
ans = 6

Computer Memory

Local variables are stored on the stack, which we can imagine as a pile of dishes (e.g., n and f in the rfact function). Each recursive call pushes its variables onto the stack; once the base case is reached, the calls return step by step and the stack unwinds.
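As a sketch of that stacking behavior (this helper is hypothetical, not part of the course code), we can print the nesting level on the way down and again on the way up:

```matlab
% Hypothetical helper to visualize the call stack: each call prints its
% value of n when its frame is pushed and again when it is popped.
function stack_demo(n)
    fprintf('entering: n = %d\n', n);   % frame pushed onto the stack
    if n > 0
        stack_demo(n - 1);              % this frame waits for the deeper call
    end
    fprintf('returning: n = %d\n', n);  % frame popped off the stack
end
```

Calling stack_demo(2) prints the "entering" lines for n = 2, 1, 0 first, then the "returning" lines in reverse order: 0, 1, 2.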
rfact(0)
ans = 1
rfact(1)
ans = 1
rfact(5)
ans = 120
% rfact(2.5)
% Out of memory: infinite recursion, because n never reaches exactly 0.
So now we add a check that the input is a nonnegative scalar integer.
help fix
fix Round towards zero.
fix(X) rounds the elements of X to the nearest integers
towards zero.

See also floor, round, ceil.

Reference page for fix
Other functions named fix
help isscalar
isscalar True if input is a scalar.
isscalar(S) returns logical 1 (true) if SIZE(S) returns [1 1] and
logical 0 (false) otherwise.

See also isvector, isrow, iscolumn, ismatrix, isnumeric, islogical, ischar,
isempty, size.

Reference page for isscalar
Other functions named isscalar
% function f = rfact(n)
%     if ~isscalar(n) || n~=fix(n) || n<0
%         error("nonnegative scalar integer input expected")
%     end
%     if n == 0
%         f = 1;
%     else
%         f = n*rfact(n-1);
%     end
% end
rfact(2.5)
Error using rfact (line 3)
nonnegative scalar integer input expected
rfact(5)
ans = 120
rfact(600)
ans = Inf
rfact(50000)
ans = Inf
% rfact(69899)
rfact(69899)
Out of memory. The likely cause is an infinite recursion within the program.
Error in rfact (line 8)
f=n*rfact(n-1);
This means there is not enough stack memory to carry out the recursion.

Conditional check

We can add a conditional check: right-click on the line-2 breakpoint and add a condition.
rfact(4)
2 if ~isscalar(n) ||n~=fix(n) || n<0
We can see the stack, run the code line by line, and watch the variables change in the Workspace.
End the debugging session by clicking Quit Debugging.

Non-recursive (iterative) factorial function

% function f = ifact(n)
%     if ~isscalar(n) || n~=fix(n) || n<0
%         error("nonnegative scalar integer input expected")
%     end
%     f = 1;
%     for ii = 1:n
%         f = ii*f;
%     end
% end
ifact(3)
ans = 6
ifact(0)
ans = 1
ifact(69899)
ans = Inf
%rfact(69899)
rfact(69899)
Out of memory. The likely cause is an infinite recursion within the program.
Error in rfact (line 8)
f=n*rfact(n-1);
So here we can see that the iterative algorithm is better: it uses constant stack space, so it does not run out of memory.

Recursive versus Iterative
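
As a rough sketch of how we might compare the two factorial versions above (using rfact and ifact from this section; the timeit function is covered later in these notes):

```matlab
% Sketch: time the recursive and iterative factorial for the same input.
% Assumes rfact.m and ifact.m from above are on the MATLAB path.
n = 5000;
tr = timeit(@() rfact(n));   % recursive: one stack frame per level
ti = timeit(@() ifact(n));   % iterative: constant stack space
fprintf('recursive: %.4g s, iterative: %.4g s\n', tr, ti)
```

Both return Inf for n this large (double overflow), but the memory behavior still differs: the recursive version grows the stack with n and eventually runs out of memory, as shown above with rfact(69899).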

Sierpiński triangle

help clf
clf Clear current figure.
clf deletes all children of the current figure with visible handles.

clf RESET deletes all children (including ones with hidden
handles) and also resets all figure properties, except Position,
Units, PaperPosition and PaperUnits, to their default values.

clf(FIG) or clf(FIG,'RESET') clears the single figure with handle FIG.

FIG_H = clf(...) returns the handle of the figure.

See also cla, reset, hold.

Reference page for clf
help axis
axis Control axis scaling and appearance.
axis([XMIN XMAX YMIN YMAX]) sets scaling for the x- and y-axes
on the current plot.
axis([XMIN XMAX YMIN YMAX ZMIN ZMAX]) sets the scaling for the
x-, y- and z-axes on the current 3-D plot.
axis([XMIN XMAX YMIN YMAX ZMIN ZMAX CMIN CMAX]) sets the
scaling for the x-, y-, z-axes and color scaling limits on
the current axis (see CAXIS).
V = axis returns a row vector containing the scaling for the
current plot. If the current view is 2-D, V has four
components; if it is 3-D, V has six components.

axis AUTO returns the axis scaling to its default, automatic
mode where, for each dimension, 'nice' limits are chosen based
on the extents of all line, surface, patch, and image children.
axis MANUAL freezes the scaling at the current limits, so that if
HOLD is turned on, subsequent plots will use the same limits.
axis TIGHT sets the axis limits to the range of the data.
axis FILL sets the axis limits and PlotBoxAspectRatio so that
the axis fills the position rectangle. This option only has
an effect if PlotBoxAspectRatioMode or DataAspectRatioMode are
manual.

axis IJ puts MATLAB into its "matrix" axes mode. The coordinate
system origin is at the upper left corner. The i axis is
vertical and is numbered from top to bottom. The j axis is
horizontal and is numbered from left to right.
axis XY puts MATLAB into its default "Cartesian" axes mode. The
coordinate system origin is at the lower left corner. The x
axis is horizontal and is numbered from left to right. The y
axis is vertical and is numbered from bottom to top.

axis EQUAL sets the aspect ratio so that equal tick mark
increments on the x-,y- and z-axis are equal in size. This
makes SPHERE(25) look like a sphere, instead of an ellipsoid.
axis IMAGE is the same as axis EQUAL except that the plot
box fits tightly around the data.
axis SQUARE makes the current axis box square in size.
axis NORMAL restores the current axis box to full size and
removes any restrictions on the scaling of the units.
This undoes the effects of axis SQUARE and axis EQUAL.
axis VIS3D freezes aspect ratio properties to enable rotation of
3-D objects and overrides stretch-to-fill.

axis OFF turns off all axis labeling, tick marks and background.
axis ON turns axis labeling, tick marks and background back on.

axis(H,...) changes the axes handles listed in vector H.

See also axes, grid, subplot, xlim, ylim, zlim, rlim

Reference page for axis
Other functions named axis
s = 1; c = [0;0]; % s = length of side, c = center
clf
axis([c(1)-s/2,c(1)+s/2,c(2)-s/2,c(2)+s/2],'equal')
plot(c(1)-[s,0,-s,s]/2,c(2)-[s,-s,s,s]*sqrt(3)/4,'k')
plot([1,5,2,3,5,0.5])
plot([1,2,3,1],[3/4,3,3/4,3/4])
plot(c(1)-[s,0,-s,s]/2,c(2)-[s,-s,s,s]*sqrt(3)/4)
plot([s,0,-s,s]/2,[s,-s,s,s]*sqrt(3)/4)
plot(-[s,0,-s,s]/2,-[s,-s,s,s]*sqrt(3)/4)
plot([-0.5,0,0.5,-0.5],[-0.5,0.5,-0.5,-0.5])
3.^[1:3]
ans = 1×3
3 9 27
% function sierpinski(depth)
%     s = 1; c = [0;0]; % s = length of side, c = center
%     clf; axis([c(1)-s/2,c(1)+s/2,c(2)-s/2,c(2)+s/2],'equal');
%     title(sprintf('Sierpinski Triangle with Depth = %d', depth))
%     hold on;
%     sier(depth, s, c);
%     hold off;
% end
% function sier(d, s, c)
%     if d <= 0 % base case
%         plot(c(1)-[s,0,-s,s]/2,c(2)-[s,-s,s,s]*sqrt(3)/4,'k');
%     else % recursive case
%         s = s/2; % cuts size in half
%         h = s*sqrt(3)/2; % height
%         sier( d-1, s, c - [s;h]/2 ); % bottom left
%         sier( d-1, s, c + [0;h]/2 ); % top middle
%         sier( d-1, s, c + [s;-h]/2 ); % bottom right
%     end
% end
sierpinski(8)
Number of times the sier function is called: for depth 8 it is called 9841 times.
sum(3.^(0:8))
ans = 9841
Number of smallest triangles = 3^depth = 3^8 = 6561
3^8
ans = 6561
The maximum height the stack reaches here is depth + 1 = 9.
sierpinski(1);
for d = 1:7
    t0 = cputime;
    sierpinski(d);
    t(d) = cputime - t0
end
t = 1×7
0.5156 0.1563 0.0781 0.2656 0.3906 1.0000 3.1719
t = 1×7
0.5156 0.0313 0.0781 0.2656 0.3906 1.0000 3.1719
t = 1×7
0.5156 0.0313 0.1250 0.2656 0.3906 1.0000 3.1719
t = 1×7
0.5156 0.0313 0.1250 0.1719 0.3906 1.0000 3.1719
t = 1×7
0.5156 0.0313 0.1250 0.1719 0.4219 1.0000 3.1719
t = 1×7
0.5156 0.0313 0.1250 0.1719 0.4219 1.0938 3.1719
t = 1×7
0.5156 0.0313 0.1250 0.1719 0.4219 1.0938 3.2656
plot(t);
grid on

Now we will plot this in an efficient way

% function sierpinskiE(depth)
%     s = 1; c = [0;0]; % s = length of side, c = center
%     clf; axis([c(1)-s/2,c(1)+s/2,c(2)-s/2,c(2)+s/2],'equal');
%     title(sprintf('Sierpinski Triangle with Depth = %d', depth))
%     hold on;
%     plot(1/2*[-1,0,1,-1],sqrt(3)/4*[-1,1,-1,-1],'r--');
%     pts = sier(depth, s, c, []);
%     plot(pts(1,:),pts(2,:),'k-');
%     hold off;
% end
% function pts = sier(d, s, c, pts)
%     if d <= 0 % base case
%         pts = [pts, c + [[-s,0,s,-s,nan]/2;sqrt(3)*[-s,s,-s,-s,nan]/4]];
%     else % recursive case
%         s = s/2; % cuts size in half
%         h = s*sqrt(3)/2; % height
%         pts = sier( d-1, s, c - [s;h]/2, pts ); % bottom left
%         pts = sier( d-1, s, c + [0;h]/2, pts ); % top middle
%         pts = sier( d-1, s, c + [s;-h]/2, pts ); % bottom right
%     end
% end
sierpinskiE(7)
With the first function, sierpinski, it is not practical to plot to depth 12; the computer hangs. With sierpinskiE we can easily plot to depth 12 in much less time.
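The speed-up comes from collecting all the vertices and calling plot once, using NaN to break the line between triangles. A minimal illustration of that NaN trick (not from the course):

```matlab
% Two separate segments drawn by a single plot call:
% NaN creates a break in the line, so one graphics object
% can display many disconnected pieces.
x = [0 1 NaN 2 3];
y = [0 1 NaN 0 1];
plot(x, y, 'k-')
```

This is why sier in sierpinskiE appends a nan column after each triangle's corner coordinates: thousands of triangles become one plot call instead of thousands.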
sierpinskiE(1);
for d = 1:12
    t0 = cputime;
    sierpinskiE(d);
    t(d) = cputime - t0
end
t = 1×7
0.2969 0.0313 0.1250 0.1719 0.4219 1.0938 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.1719 0.4219 1.0938 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.1719 0.4219 1.0938 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.0469 0.4219 1.0938 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.0469 0.0938 1.0938 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 3.2656
t = 1×7
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469
t = 1×8
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469 0.1094
t = 1×9
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469 0.1094 0.1406
t = 1×10
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469 0.1094 0.1406 0.4375
t = 1×11
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469 0.1094 0.1406 0.4375 1.1563
t = 1×12
0.2969 0.0469 0.1250 0.0469 0.0938 0.0625 0.0469 0.1094 0.1406 0.4375 1.1563 3.3125
plot(t);
grid on

Greatest Common Divisor (GCD)

We have studied two recursive examples. Now let's understand this one:
% function d = rgcd(x, y)
%     if (~isscalar(x) || x~=fix(x) || x<0 || ...
%         ~isscalar(y) || y~=fix(y) || y<0)
%         error("X, Y must be nonnegative scalar integers");
%     end
%     a = max([x,y]);
%     b = min([x,y]);
%     if b == 0
%         d = a;
%     else
%         d = rgcd(b, rem(a,b));
%     end
% end
rgcd(21,14)
ans = 7
Iterative version of GCD
% function d = igcd(x, y)
%     if (~isscalar(x) || x~=fix(x) || x<0 || ...
%         ~isscalar(y) || y~=fix(y) || y<0)
%         error("X, Y must be nonnegative scalar integers");
%     end
%     a = max([x,y]);
%     b = min([x,y]);
%     while b ~= 0
%         r = rem(a,b);
%         a = b;
%         b = r;
%     end
%     d = a;
% end
igcd(102,54)
ans = 6

Assignment

% Pramod Kumar Yadav
%[linkedin,insta,fb,github]@iampramodyadav

Problem 1: Digit Summation

Write a recursive function to sum the individual digits of an input number.
my solution
% digit summation code
% function s = digit_sum(n)
%     if (~isscalar(n) || n~=fix(n) || n<0)
%         error("n must be a nonnegative scalar integer");
%     end
%     if n == 0
%         s = 0;
%     else
%         s = digit_sum(fix(n*0.1)) + n - fix(n*0.1)*10;
%     end
% end
digit_sum(12345)
ans = 15
Coursera solution
% function res = cdigit_sum(n)
%     if n < 10
%         res = n;
%     else
%         res = cdigit_sum(floor(n/10)) + rem(n,10);
%     end
% end
cdigit_sum(12345)
ans = 15

Problem 2: Maximum Element

Write a recursive function to find the maximum element in a vector.
my solution
v=[1,32,2,58,-55,0,12];
n=length(v)
n = 7
v(1)
ans = 1
v(2:n)
ans = 1×6
32 2 58 -55 0 12
length(v(2:n))
ans = 6
numel(v)
ans = 7
Using max function
% function m = max_in(v)
%     n = length(v);
%     if n == 1
%         m = v(1);
%     else
%         m = max(v(1), max_in(v(2:n)));
%     end
% end
max_in(v)
ans = 58
without using max function
% function m = cmax_in(v)
%     n = length(v);
%     if n == 1
%         m = v(1);
%     else
%         m = cmax_in(v(2:n));
%         if v(1) > m
%             m = v(1);
%         end
%     end
% end
cmax_in(v)
ans = 58
Solution by coursera
% function mx = recursive_max(v)
%     if length(v) == 1
%         mx = v(1);
%     else
%         % Each recursion, v(2:end) becomes smaller by 1 element
%         mx = bigger(v(1), recursive_max(v(2:end)));
%     end
% end
% % Cannot use the max function. Use a helper function to return the
% % larger of two elements
% function c = bigger(a,b)
%     c = a;
%     if a < b
%         c = b;
%     end
% end
recursive_max(v)
ans = 58

Problem 3: Reverse a Vector

Write a recursive function that returns a vector with the elements of an input vector reversed.
v=[1 2 3 4 5]
v = 1×5
1 2 3 4 5
v(end)
ans = 5
v(1:end-1)
ans = 1×4
1 2 3 4
% function w = reversal(v)
%     if length(v) == 1
%         w = v;
%     else
%         w = [v(end), reversal(v(1:end-1))];
%     end
% end
reversal(v)
ans = 1×5
5 4 3 2 1

Problem 4: Fibonacci Series

Write a recursive function that computes the elements of the Fibonacci series.
% function f = fibor(n)
%     if ~isscalar(n) || n~=fix(n) || n<=0
%         error("positive scalar integer input expected")
%     end
%     if n == 1
%         f = 1;
%     elseif n == 2
%         f = [1 1];
%     else
%         p = fibor(n-1);
%         f = [p, p(end)+p(end-1)];
%     end
% end
fibor(5)
ans = 1×5
1 1 2 3 5

Problem 5: Palindrome

Write a recursive function that determines if the input is a palindrome.
d='pendrom'
d = 'pendrom'
d(1)
ans = 'p'
% function p = palindrome(s)
%     if length(s) == 0
%         p = true;
%     elseif length(s) == 1
%         p = true;
%     else
%         p = (s(1)==s(end) && palindrome(s(2:end-1)));
%     end
% end
palindrome('pramod')
ans = logical
0
palindrome('inddni')
ans = logical
1

Variable Number of Arguments

help nargin
nargin Number of function input arguments.
Inside the body of a user-defined function, nargin returns
the number of input arguments that were used to call the
function.

nargin(FUN) returns the number of declared inputs for the
function FUN. The number of arguments is negative if the
function has a variable number of input arguments. FUN can be
a function handle that maps to a specific function, or a character
vector or string scalar that contains the name of that function.

See also nargout, varargin, narginchk, inputname, mfilename.

Reference page for nargin
Other functions named nargin
help nargout
nargout Number of function output arguments.
Inside the body of a user-defined function, nargout returns the
number of output arguments that were used to call the function.

nargout(FUN) returns the number of declared outputs for the
function FUN. The number of arguments is negative if the
function has a variable number of output arguments. FUN can be
a function handle that maps to a specific function, or a character
vector or string scalar that contains the name of that function.

See also nargin, varargout, nargoutchk, inputname, mfilename.

Reference page for nargout
Other functions named nargout
help varargin
varargin Variable length input argument list.
Allows any number of arguments to a function. The variable
varargin is a cell array containing the optional arguments to the
function. varargin must be declared as the last input argument
and collects all the inputs from that point onwards. In the
declaration, varargin must be lowercase (i.e., varargin).

For example, the function,

function myplot(x,varargin)
plot(x,varargin{:})

collects all the inputs starting with the second input into the
variable "varargin". MYPLOT uses the comma-separated list syntax
varargin{:} to pass the optional parameters to plot. The call,

myplot(sin(0:.1:1),'color',[.5 .7 .3],'linestyle',':')

results in varargin being a 1-by-4 cell array containing the
values 'color', [.5 .7 .3], 'linestyle', and ':'.

See also varargout, nargin, nargout, inputname, function, lists, paren.

Reference page for varargin
help varagout
--- help for varargout ---

varargout Variable length output argument list.
Allows any number of output arguments from a function. The
variable varargout is a cell array containing the
optional output arguments from the function. varargout must be
declared as the last output argument and must contain all the
outputs after that point onwards. In the declaration, varargout
must be lowercase (i.e., varargout).

varargout is not initialized when the function is invoked. You
must create it before your function returns. Use NARGOUT to
determine the number of outputs to produce.

For example, the function,

function [s,varargout] = mysize(x)
nout = max(nargout,1)-1;
s = size(x);
for i=1:nout, varargout(i) = {s(i)}; end

returns the size vector and optionally individual sizes. So,

[s,rows,cols] = mysize(rand(4,5));

returns s = [4 5], rows = 4, cols = 5.

See also varargin, nargin, nargout, function, lists, paren.

Reference page for varargout
help find
find Find indices of nonzero elements.
I = find(X) returns the linear indices corresponding to
the nonzero entries of the array X. X may be a logical expression.
Use IND2SUB(SIZE(X),I) to calculate multiple subscripts from
the linear indices I.

I = find(X,K) returns at most the first K indices corresponding to
the nonzero entries of the array X. K must be a positive integer,
but can be of any numeric type.

I = find(X,K,'first') is the same as I = find(X,K).

I = find(X,K,'last') returns at most the last K indices corresponding
to the nonzero entries of the array X.

[I,J] = find(X,...) returns the row and column indices instead of
linear indices into X. This syntax is especially useful when working
with sparse matrices. If X is an N-dimensional array where N > 2, then
J is a linear index over the N-1 trailing dimensions of X.

[I,J,V] = find(X,...) also returns a vector V containing the values
that correspond to the row and column indices I and J.

Example:
A = magic(3)
find(A > 5)

finds the linear indices of the 4 entries of the matrix A that are
greater than 5.

[rows,cols,vals] = find(speye(5))

finds the row and column indices and nonzero values of the 5-by-5
sparse identity matrix.

See also sparse, ind2sub, relop, nonzeros.

Reference page for find
Other functions named find
v=[1,0,32,2,58,-55,0,2];
find(v==2)
ans = 1×2
4 8
find(v==58)
ans = 5
find(v==0)
ans = 1×2
2 7
find(v==9) % not in list
ans =

1×0 empty double row vector
find(v) % indices 2 and 7 are missing, since v is zero there
ans = 1×6
1 3 4 5 6 8
find(v,3) % first 3 indices of nonzero elements; index 2 is skipped since v(2) is zero
ans = 1×3
1 3 4
help rng
rng Control the random number generator used by RAND, RANDI, and RANDN.
rng(SD) seeds the random number generator using the non-negative
integer SD so that RAND, RANDI, and RANDN produce a predictable
sequence of numbers.

rng('shuffle') seeds the random number generator based on the current
time so that RAND, RANDI, and RANDN produce a different sequence of
numbers after each time you call rng.

rng(SD,GENERATOR) and rng('shuffle',GENERATOR) additionally specify the
type of the random number generator used by RAND, RANDI, and RANDN.
GENERATOR is one of:

Generator Description
------------------------------------------------------------------
'twister' Mersenne Twister
'simdTwister' SIMD-oriented Fast Mersenne Twister
'combRecursive' Combined Multiple Recursive
'multFibonacci' Multiplicative Lagged Fibonacci
'v5uniform' Legacy MATLAB 5.0 uniform generator
'v5normal' Legacy MATLAB 5.0 normal generator
'v4' Legacy MATLAB 4.0 generator

rng('default') puts the settings of the random number generator used by
RAND, RANDI, and RANDN to their default values so that they produce the
same random numbers as if you restarted MATLAB. In this release, the
default settings are the Mersenne Twister with seed 0.

SCURR = rng returns the current settings of the random number generator
used by RAND, RANDI, and RANDN. The settings are returned in a
structure SCURR with fields 'Type', 'Seed', and 'State'.

rng(S) restores the settings of the random number generator used by
RAND, RANDI, and RANDN back to the values captured previously by
S = rng.

SPREV = rng(...) returns the previous settings of the random number
generator used by RAND, RANDI, and RANDN before changing the settings.

Example 1: Retrieve and Restore Generator Settings
s = rng % get the current generator settings
x = rand(1,5) % RAND generates some values
rng(s) % restore the generator settings
y = rand(1,5) % generate the same values so x and y are equal

Example 2: Restore Settings for Legacy Generator
prevS = rng(0,'v5uniform') % use legacy generator, save previous settings
x = rand % legacy startup value .9501
rng(prevS) % restore the previous settings

See Replace Discouraged Syntaxes of rand and randn to use rng to replace
RAND or RANDN with the 'seed', 'state', or 'twister' inputs.

See also rand, randi, randn, RandStream, now.

Reference page for rng
fprintf can handle an unlimited number of arguments; we can also write functions that handle an unlimited number of arguments. First, here is a function that cannot handle an unlimited number of arguments, but uses nargin to supply a default value:
% function index = find_first(v,e)
%     if nargin == 0
%         error('At least one argument is required')
%     elseif nargin == 1
%         e = 0;
%     end
%     index = 0;
%     indices = find(v == e);
%     if ~isempty(indices)
%         index = indices(1);
%     end
% end
rng(0);
v=randi([-3,3],1,12)
v = 1×12
2 3 -3 3 1 -3 -2 0 3 3 -2 3
find_first(v,1) %one is at index 5
ans = 5
find_first(v,2)
ans = 1
find_first(v,12) %not in v so 0
ans = 0
find_first(v,0)
ans = 8
find_first(v) % 2nd argument not given, so e defaults to 0
ans = 8
Passing multiple arguments to fprintf via varargin: varargin can take any number of arguments.
% function print_all(varargin)
%     for ii = 1:nargin
%         fprintf('Here is input argument number %d: %d\n', ii, varargin{ii})
%     end
% end
print_all(pi)
Here is input argument number 1: 3.141593e+00
print_all(7,-3)
Here is input argument number 1: 7
Here is input argument number 2: -3
print_all(4,5,6,8,-3,8)
Here is input argument number 1: 4
Here is input argument number 2: 5
Here is input argument number 3: 6
Here is input argument number 4: 8
Here is input argument number 5: -3
Here is input argument number 6: 8
help fprintf
fprintf Write formatted data to text file.
fprintf(FID, FORMAT, A, ...) applies the FORMAT to all elements of array A and
any additional array arguments in column order, and writes the data to a text
file. FID is an integer file identifier. Obtain FID from FOPEN, or set it to 1
(for standard output, the screen) or 2 (standard error). fprintf uses the
encoding scheme specified in the call to FOPEN.

fprintf(FORMAT, A, ...) formats data and displays the results on the screen.

COUNT = fprintf(...) returns the number of bytes that fprintf writes.

FORMAT is a character vector that describes the format of the output fields, and
can include combinations of the following:

* Conversion specifications, which include a % character, a
conversion character (such as d, i, o, u, x, f, e, g, c, or s), and optional
flags, width, and precision fields. For more details, type "doc fprintf" at
the command prompt.

* Literal text to print.

* Escape characters, including:
\b Backspace '' Single quotation mark
\f Form feed %% Percent character
\n New line \\ Backslash
\r Carriage return \xN Hexadecimal number N
\t Horizontal tab \N Octal number N
For most cases, \n is sufficient for a single line break. However, if you
are creating a file for use with Microsoft Notepad, specify a combination of
\r\n to move to a new line.

Notes:

If you apply an integer or text conversion to a numeric value that contains a
fraction, MATLAB overrides the specified conversion, and uses %e.

Numeric conversions print only the real component of complex numbers.

Example: Create a text file called exp.txt containing a short table of the
exponential function.

x = 0:.1:1;
y = [x; exp(x)];
fid = fopen('exp.txt','w');
fprintf(fid,'%6.2f %12.8f\n',y);
fclose(fid);

Examine the contents of exp.txt:

type exp.txt

MATLAB returns:
0.00 1.00000000
0.10 1.10517092
...
1.00 2.71828183

See also fopen, fclose, fscanf, fread, fwrite, sprintf, disp.

Reference page for fprintf
Other functions named fprintf
help sprintf
sprintf Write formatted data to string or character vector
STR = sprintf(FORMAT, A, ...) applies FORMAT to all elements of
array A and any additional array arguments in column order, and returns
the results as STR. FORMAT can be a character vector or a string
scalar. The data type of STR is the same as the data type of FORMAT.

[STR, ERRMSG] = sprintf(FORMAT, A, ...) returns an error message when
the operation is unsuccessful. Otherwise, ERRMSG is empty.

sprintf is the same as FPRINTF except that it returns the data in a
MATLAB variable rather than writing to a file.

FORMAT describes the format of the output fields, and can include
combinations of the following:

* Conversion specifications, which include a % character, a
conversion character (such as d, i, o, u, x, f, e, g, c, or s),
and optional flags, width, and precision fields. For more
details, type "doc sprintf" at the command prompt.

* Literal text to print.

* Escape characters, including:
\b Backspace '' Single quotation mark
\f Form feed %% Percent character
\n New line \\ Backslash
\r Carriage return \xN Hexadecimal number N
\t Horizontal tab \N Octal number N%
where \n is a line termination character on all platforms.

Notes:

If you apply an integer or text conversion to a numeric value that
contains a decimal, MATLAB overrides the specified conversion, and
uses %e to express the value in exponential notation.

Numeric conversions print only the real component of complex numbers.

Examples
sprintf('%0.5g',(1+sqrt(5))/2) % 1.618
sprintf('%0.5g',1/eps) % 4.5036e+15
sprintf('%15.5f',1/eps) % 4503599627370496.00000
sprintf('%d',round(pi)) % 3
sprintf('%s','hello') % hello
sprintf('The array is %dx%d.',2,3) % The array is 2x3.

See also fprintf, sscanf, num2str, int2str, char, string, compose.

Reference page for sprintf
Other functions named sprintf
sprintf('the first three positive integer are %d,%d and %d',1,2,3)
ans = 'the first three positive integer are 1,2 and 3'
Now we are going to write a function of our own that works like sprintf:
% function out = print_num(format,varargin)
%     out = '';
%     argindex = 1;
%     skip = false;
%     for ii = 1:length(format)
%         if skip
%             skip = false;
%         else
%             if format(ii) ~= '%'
%                 out(end+1) = format(ii);
%             else
%                 if ii+1 > length(format)
%                     break;
%                 end
%                 if format(ii+1) == '%'
%                     out(end+1) = '%';
%                 else
%                     if argindex >= nargin
%                         error('not enough input arguments');
%                     end
%                     out = [out num2str(varargin{argindex},format(ii:ii+1))];
%                     argindex = argindex + 1;
%                 end
%                 skip = true;
%             end
%         end
%     end
% end
print_num('the first three positive integer are %d,%d and %d',1,2,3)
ans = 'the first three positive integer are 1,2 and 3'
pct=20;
a=934.40;
print_num('%d%% of %f is %f',pct, a, a*pct/100)
ans = '20% of 934.400000 is 186.880000'
% function varargout = distribute(v)
%     for ii = 1:length(v)
%         varargout{ii} = v(ii);
%     end
% end
[a,b,c]=distribute([14,-pi,0])
a = 14
b = -3.1416
c = 0
[a,b]=distribute([14,-pi,0])
a = 14
b = -3.1416
% [a,b,c,d]=distribute([14,-pi,0]) % would error: more outputs requested than elements
One last detail that we should mention is that, just as with varargin, you can include normal local variables along with varargout in the list of formal output arguments in the function signature. But as with varargin, varargout must be the last argument in the list. In analogy to varargin, varargout holds any excess output arguments that come after the normal ones.
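As a small sketch of that last point (a made-up function, not from the course), a normal output argument can precede varargout:

```matlab
% Hypothetical function: the first output is ordinary; varargout
% holds any extra outputs the caller asks for.
function [total, varargout] = head_and_rest(v)
    total = sum(v);              % normal output argument
    for ii = 1:nargout-1         % nargout counts ALL requested outputs
        varargout{ii} = v(ii);   % fill the excess outputs in order
    end
end
```

For example, [t, a, b] = head_and_rest([4 5 6]) gives t = 15, a = 4, b = 5.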
All right, we've relearned what nargin and nargout do, and we've learned what varargin and varargout do. The first two count arguments, and the second two store variable numbers of arguments. That's how MATLAB allows you to define functions that can handle unlimited numbers of arguments.

Assignment

Problem 1: Name-value Pairs

Write a function that pairs up the keys and the respective values from an unspecified number of inputs.
% % Problem 1: Name-value Pairs
% % Write a function that pairs up the keys and the
% % respective values from an unspecified number of inputs
%
% function db = name_value_pairs(varargin)
%     if nargin > 0 && rem(nargin, 2) == 0
%         db = cell(nargin/2, 2);
%         for ii = 1:nargin
%             if rem(ii,2) ~= 0
%                 if ischar(varargin{ii})
%                     db{(ii+1)/2,1} = varargin{ii};
%                 else
%                     db = {};
%                     return;
%                 end
%             else
%                 db{ii/2,2} = varargin{ii};
%             end
%         end
%     else
%         db = {};
%     end
% end
name_value_pairs('name','John Smith','age',32,'children',{'Joe','Jill'})
ans = 3×2 cell array
{'name' } {'John Smith'}
{'age' } {[ 32]}
{'children'} {1×2 cell }

Problem 2: Data Entry

Write a function that keys in new voters' information into the current database.
% function database = voters(database,varargin)
% % Get the length of the input database
% count = length(database);
%
% % Create a copy of the database. This will be the new database if input
% % is valid
% tmp = database;
%
% % Names and IDs come in pairs. Increment loop counter by 2
% for ii = 1:2:length(varargin)
% % Make sure the Name is a char or string
% if ischar(varargin{ii}) || isstring(varargin{ii})
% count = count + 1;
% tmp(count).Name = string(varargin{ii});
% % Make sure there is a valid ID
% if ii+1 <= length(varargin) && isnumeric(varargin{ii+1}) && round(varargin{ii+1}) == varargin{ii+1}
% tmp(count).ID = varargin{ii+1};
% else
% % Not valid input. Return original database
% return;
% end
% else
% % Not valid input. Return original database
% return;
% end
% end
% % All inputs valid. Update database.
% database = tmp;
% end
database = voters([], 'Adam', 11111)
database = struct with fields:
Name: "Adam"
ID: 11111
database = voters(database, 'Eve', 22222)
database = 1×2 struct array with fields:
Name
ID
database(1)
ans = struct with fields:
Name: "Adam"
ID: 11111
database(2)
ans = struct with fields:
Name: "Eve"
ID: 22222

Function Handles and Nested Functions

For example, some functions can take other functions as arguments. The integral function, for instance, uses numerical approximation to estimate the integral of any function supplied as an input argument. Another case will come up in a couple of weeks when we'll show you how to create a graphical user interface, and you'll learn that you can specify which function should be called when various buttons are clicked. These are termed callback functions, and you specify them using function handles.
trig=@sin
trig = function_handle with value:
@sin
x=trig(pi/2)
x = 1
plot(trig(0:0.01:2*pi))
close all
x=trig
x = function_handle with value:
@sin
This time, MATLAB saw that we did not include an input argument, so instead of using the function handle to call the sin function and copying the return value into x, it just copied the function handle itself into x. So now we can call the sin function through x.
x(pi/2)
ans = 1
But what about functions that you call with no arguments, like the function pi? It may not seem like a function, but it actually is one, a built-in function. It just doesn't look like a function because it takes no input arguments and returns just one output argument. Let's assign a function handle for pi to the variable mypi.
x=pi
x = 3.1416
mypi=@pi
mypi = function_handle with value:
@pi
Now let's try to use it to give X the value Pi again. Well, that didn't work.
x=mypi
x = function_handle with value:
@pi
To call the function, just include empty parentheses like this,
y=mypi()
y = 3.1416
If you want to put together an array of several function handles, you need to use a cell array:
xpt={@sin @cos @plot}
xpt = 1×3 cell array
{@sin} {@cos} {@plot}
xpt{2}(0)
ans = 1
Calling the function through cell array indexing might look a little bit strange at first. This command calls the cosine function with an input argument of zero.
xpt{1}(pi/2)
ans = 1
It first uses curly braces to select the appropriate element of the cell array and then uses regular parentheses to provide the input arguments to the selected function.
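As a side note (not from the lecture), you can call every handle in such a cell array in one shot with cellfun, wrapping each call in an anonymous function:
cellfun(@(f) f(0), {@sin @cos})
ans = 1×2
0 1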
Here's a command to "impress your friends":
xpt{3}(xpt{1}(-pi:0.01:pi))
We just called plot (element three in curly braces) to draw the sine (element one in curly braces) from minus pi to pi.
help fplot
fplot Plot 2-D function
fplot(FUN) plots the function FUN between the limits of the current
axes, with a default of [-5 5].

fplot(FUN,LIMS) plots the function FUN between the x-axis limits
specified by LIMS = [XMIN XMAX].

fplot(...,'LineSpec') plots with the given line specification.

fplot(X,Y,LIMS) plots the parameterized curve with coordinates
X(T), Y(T) for T between the values specified by LIMS = [TMIN TMAX].

H = fplot(...) returns a handle to the function line object created by fplot.

fplot(AX,...) plots into the axes AX instead of the current axes.

Examples:
fplot(@sin)
fplot(@(x) x.^2.*sin(1./x),[-1,1])
fplot(@(x) sin(1./x), [0 0.1])

If your function cannot be evaluated for multiple x values at once,
you will get a warning and somewhat reduced speed:
f = @(x,n) abs(exp(-1j*x*(0:n-1))*ones(n,1));
fplot(@(x) f(x,10),[0 2*pi])

See also fplot3, fsurf, fcontour, fimplicit, plot, function_handle.

Reference page for fplot
fplot(@sin,[0 2*pi])
But you might argue that we could do the same thing without a function handle, the good old-fashioned way with plot.
plot(0:0.01:2*pi, sin(0:0.01:2*pi))
Sure, but here we had to provide the actual vector of x values twice, while fplot does all that for us. And it gets better; look at this example.
fplot(@tan,[0,pi])
Now, let's plot the same function with the plot function:
plot(0:0.01:pi, tan(0:0.01:pi))
Not so good. Now let's increase the resolution:
plot(0:0.001:pi, tan(0:0.001:pi))
Still not good.
Now we can understand why fplot is important: it chooses its own sample points and adapts them to the function.

Anonymous function

Let's say we have a polynomial expression that we need to use a few times in our program. We could create a sub function, but it would be overkill to do that for a single expression, not to mention that we'd need to make up a meaningful name for the function. Instead, MATLAB allows us to create a so-called anonymous function as a single expression and return a function handle to it all in one fell swoop.
poly=@(x) 2*x.^3-x.^2+2*x-12;
poly(1)
ans = -9
poly(0:5)
ans = 1×6
-12 -9 4 39 108 223
plot(-10:10,poly(-10:10))
fplot(poly,[-10,10])
xnf=@(x,y) x+y;
xnf(1,2)
ans = 3
x=1000
x = 1000
y=2000
y = 2000
xnf(11,12)
ans = 23
x,y
x = 1000
y = 2000
Here we first gave x and y the values 1,000 and 2,000. Then we called xnf, giving its local variables with the same names, x and y, the values 11 and 12. After the function returned, we checked x and y and found that they still have the same values they had before the function was called. So the variables outside the function are not accessible inside the function if they have the same names as any of the arguments.
c=10;
f=@(x) c*x;
f(3)
ans = 30
If f held a global reference to c, then changing c to 11 and rerunning the function would give 33, which is exactly what would happen if f were the handle of an ordinary function that made a global reference to c. But that is not what happens for an anonymous function.
Here we change c from 10 to 11, but the function does not change. It still multiplies its input argument by 10.
c=11;
f(3)
ans = 30
You can even remove c from the workspace, and it still has no effect on our function.
clear c
f(3)
ans = 30
The anonymous function construct is called anonymous, which literally means not having a name. But didn't we give the name f to the anonymous function we just defined? Well, no. We put the handle for the function into a variable named f, but the function itself has no name. Contrast that with the first example we gave of a function handle.
trig=@sin
trig = function_handle with value:
@sin
In that case, the name of the function is sin, and trig is the name of the variable that holds the handle for sin. We can drive the anonymity of an anonymous function home by using fplot again. Here we've given fplot an anonymous function whose expression is x plus the sine of x.
fplot(@(x) x+sin(x),[-5,5])
It plots the function from minus five to five, and yet there's no name for the function anywhere. So we've seen how to define anonymous functions, with examples of one input argument and two input arguments. Of course, you can have as many input arguments as you wish. Multiple output arguments are also supported, but in a very limited way: the only way to get multiple outputs is to have the expression that is the body of the anonymous function be a call to a function that already returns multiple outputs. Here's an example.
smax=@(A) max(A.^2)
smax = function_handle with value:
@(A)max(A.^2)
[mx ind]=smax([1 2;3 4])
mx = 1×2
9 16
ind = 1×2
2 2
With anonymous functions, you can get multiple outputs only by calling a function that already produces multiple outputs. But you can get a lot of versatility by using the deal function:
xyz=@(x,y) deal(x*y, x+y)
xyz = function_handle with value:
@(x,y)deal(x*y,x+y)
[p, s]=xyz(10,20)
p = 200
s = 30
A normal function has a name and can have any number of statements in its body, but an anonymous function has no name and can have only one statement in its body. Also, if you want multiple outputs from an anonymous function, its one executable statement has to be a call to a function that already returns multiple outputs.
In addition to the three kinds of functions that we've seen so far, the main function, the sub-function, and the anonymous function, there's one more function that we need to talk about, it's called the nested function.

Nested Function

It's similar to a sub-function in that it lives in an M-file with a main function, just like a sub-function does. However, a nested function is actually defined inside the body of another function, called its parent function.
% function [y1, y2] = first_nested_example(x)
% c = 10;
% sub(c,x);
% y1 = inner(x);
% %nested function start here
% function out = inner(in)
% out = c*in;
% end
% %nested function end here
% c = 11;
% sub(c,x)
% y2 = inner(x);
% end
% %sub function start here
% function sub(in1,in2)
% fprintf('Multiplying %d times %d\n',in1,in2)
% end
% %sub function end here
A nested function has access not only to all the variables inside it, but also to variables inside its parent function. Similarly, the parent function has access not only to all the variables inside it, but also to variables inside its nested functions.
[a1,b1]=first_nested_example(3)
Multiplying 10 times 3
Multiplying 11 times 3
a1 = 30
b1 = 33
Let's take another example:
% function circle_area = assignment_rule(r)
%
% calculate_area
% fprintf('Area of circle with radius %.1f = %.1f\n',r,circle_area)
%
% function calculate_area
% circle_area = pi*r^2;
% end
% end
assignment_rule(3)
Area of circle with radius 3.0 = 28.3
ans = 28.2743
% function A
% xA = 1;
% function B
% xB = 2;
% function C
% xC = 3;
% show('C','xA',xA)
% show('C','xB',xB)
% show('C','xC',xC)
% end % C
% show('B','xA',xA)
% show('B','xB',xB);
% C
% D
% end % B
% function D
% xD = 4;
% show('D','xA',xA);
% show('D','xD',xD);
% end % D
% show('A','xA',xA)
% B
% D
% end % A
% function show(funct,name,value)
% fprintf('in %s: %s = %d\n',funct,name,value);
% end
You can see here the accessibility of all the variables. In A, xA is accessible; in B, xA and xB are accessible; in C, xA, xB, and xC are accessible; and in D, xA and xD are accessible. D is called twice, once in A and once in B. This second call of D brings up a visibility rule for nested functions that we haven't mentioned yet: the rule that tells us where a function is accessible, where it can be called. There are three paths for accessibility. A parent can call a child, as for example A calls B and D, but it can't call a grandchild, so A can't call C. A sibling can call a sibling, as for example B can call D. And a descendant can call any ancestor, so C can call B or A. The sibling-calling feature is interesting because of the difference from the variable-sharing rule: while sibling functions can call each other, they can't access each other's local variables. Access to functions goes up, down, and side to side, but access to variables goes only up and down.
A
in A: xA = 1
in B: xA = 1
in B: xB = 2
in C: xA = 1
in C: xB = 2
in C: xC = 3
in D: xA = 1
in D: xD = 4
in D: xA = 1
in D: xD = 4
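To make those rules concrete, here are a few hypothetical calls (not lines from the file above) and whether they would be legal:
% Inside A, calling B or D is legal (parent to child), but calling C is not (grandchild).
% Inside B, calling D is legal (sibling to sibling), but reading xD is not:
% show('B','xD',xD)   % error: siblings cannot access each other's variables
% Inside C, calling B or A is legal (descendant to ancestor).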
Remember this example, where we set c equal to 10:
c=10;
f=@(x) c*x;
f(3)
ans = 30
c=11;
f(3)
ans = 30
clear c
f(3)
ans = 30
The value 10 is baked into the anonymous function; to change it, you'd have to define the anonymous function again. Suppose you need to do that, multiple times, with different values of c, but suppose also that, like me, you don't want to spend your valuable time defining an anonymous function over and over. Well, you can write a function to do it for you, like this:
% function fh = get_anon_handle(c)
%
% fh = @(x) c*x;
%
% end
f10=get_anon_handle(10)
f10 = function_handle with value:
@(x)c*x
f11=get_anon_handle(11)
f11 = function_handle with value:
@(x)c*x
f10(3)
ans = 30
f11(3)
ans = 33
So, via a function, we can create anonymous functions with different values of c baked in.
% function fh = get_polynomial_handle(p)
%
% function polynomial = poly(x)
% polynomial = 0;
% for ii = 1:length(p)
% polynomial = polynomial + p(ii).*x.^(ii-1);
% end
% end
%
% fh = @poly;
% end
pc=get_polynomial_handle([-4,-1,3,1])
pc = function_handle with value:
@get_polynomial_handle/poly
pq=get_polynomial_handle([-10,0,7])
pq = function_handle with value:
@get_polynomial_handle/poly
pd=get_polynomial_handle(1:5)
pd = function_handle with value:
@get_polynomial_handle/poly
pd(1)
ans = 15
pc(1)
ans = -1
pq(1)
ans = -3
It's just like what we saw for the anonymous function: once you create a handle to a function, the values of its non-local variables (in this case p) are baked in. So that's it for nested functions. But before we leave, let's plot pc and pq using fplot over the range from minus 3 to 2.
fplot(pc,[-3,2]);
hold on
fplot(pq,[-3,2])
Let's finish by recapping what we've learned.

Assignment

Problem 1: Autograder

Write a function that compares two functions' results.
help isequal
isequal True if arrays are numerically equal.
isequal(A,B) returns logical 1 (TRUE) if arrays A and B are the same
size and contain the same values, and logical 0 (FALSE) otherwise.

If A is defined and you set B = A, isequal(A,B) is not necessarily
true. If A or B contains a NaN element, isequal returns false because
NaNs are not equal to each other by definition.

isequal(A,B,C,...) returns logical 1 if all the input arguments are
numerically equal, and logical 0 otherwise.

When comparing numeric values, isequal does not consider the class
of the values in determining whether they are equal. In other words,
INT8(5) and SINGLE(5) are considered equal. This is also true when
comparing numeric values with certain nonnumeric values. Numeric 1
is equal to logical 1. The number 65 is equal to the character 'A'.

When comparing handle objects, use EQ or the == operator to test
whether objects are the same handle. Use isequal to test if objects
have equal property values, even if those objects are different
handles.

isequal recursively compares the contents of cell arrays and
structures. If all the elements of a cell array or structure are
numerically equal, isequal returns 1.

When comparing structures, the order in which the fields of the
structures were created is not important. As long as the structures
contain the same fields, with corresponding fields set to equal values,
isequal considers the structures to be equal.

See also isequaln, eq.

Reference page for isequal
Other functions named isequal
% function f=grader(fn1,fn2, varargin)
%
% for n=1:length(varargin);
% if isequal(fn1(varargin{n}),fn2(varargin{n}));
% f=true;
% else
% f=false;
% end
% end
%
% end
grader(@sin,@max,0)
ans = logical
1
grader(@sin,@max,0,1)
ans = logical
0
grader(@cos,@cos,0,1,[-pi,0,pi])
ans = logical
1
Official solution (unlike the version above, which only reflects the last test, this one returns false as soon as any test fails):
% function pass = cgrader(fn1,fn2,varargin)
% pass = false;
% for ii = 1:length(varargin)
% if ~isequal(fn1(varargin{ii}),fn2(varargin{ii}))
% return;
% end
% end
% pass = true;
% end

Problem 2: Fun with polynomial

Write a function that creates a function handle for a polynomial (modified to remove the for loop).
% function pf = poly_fun(p)
%
% function polynomial = poly(x)
%
% polynomial = sum(p.*x.^(0:length(p)-1));
%
% end
%
% pf = @poly;
% end
pc=poly_fun(1:5)
pc = function_handle with value:
@poly_fun/poly
pc(1)
ans = 15

Mixed Mode Arithmetic

single - single-precision floating point number (32 bits)
double - double-precision floating point number (64 bits)
4*pi
ans = 12.5664
int8(200)+int8(300)
ans = int8
127
4*pi has one operator, multiplication (also known as times), and two operands, 4 and pi. Because there are two operands, we call this a binary operation. The type of each of these operands is double, which is the default in MATLAB. But you can specify different types by using MATLAB's casting functions, like int8, for example.
That's because 500 is too large to fit into an int8, so the result we get is the closest number to 500 that lies inside the range of numbers that do fit into an int8, which goes from -128 to +127.
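You can confirm that range with the intmin and intmax functions:
intmin('int8')
ans = int8
-128
intmax('int8')
ans = int8
127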
So far, all our operations have used operands of the same type, so none of these expressions is an example of mixed mode arithmetic. We'll get there, but in the meantime, there's another way not to have operands with mixed types, and that's to have just one operand, like this:
-3
ans = -3
We still have one operator, the minus, but only one operand, the three. This is called a unary operation.
Here is a table showing the shape compatibility rules for binary operators.
Here we have five rules for a binary operation, which we show as an operand x, an operator op, and a second operand y, where x and y can be any scalars or two-dimensional arrays whose shapes are compatible for the operator op. For example, rule A: plus, minus, and all the dot operators, which are known collectively as array operators, require that the two operands x and y have the same size and shape.
Rule B is for *, also known as matrix multiplication. It says that the size of the second dimension of x must equal the size of the first dimension of y. Or, to put it another way, the number of columns of x must equal the number of rows of y. Here's an example that follows that rule: the first operand has three columns and the second operand has three rows.
[1 2 3;3 4 5]*[6;7;8]
ans = 2×1
44
86
If we swap the operands, we'll get an error.
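For instance, this commented-out line would fail, because the swapped operands are 3-by-1 and 2-by-3, so the inner dimensions (1 and 2) don't agree:
%[6;7;8]*[1 2 3;3 4 5]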
The last rule, E, for the exponentiation operator, also known as the power operator, says that either both operands are scalars, or one operand is a square matrix (for example, 2-by-2 is fine, but 2-by-3 is not) and the other is a scalar.
Let's try it for a three-by-three matrix raised to a scalar power.
[1 2 3;2 5 8;9 6 3]^2
ans = 3×3
32 30 28
84 77 70
48 66 84
But if we raise a non-square matrix to a power,
it will give an error.
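For example, this commented-out line would fail, because the base is 2-by-3 rather than square:
%[1 2 3;3 4 5]^2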
So the last rule requires that one of the operands be a scalar. And on the subject of scalars, the other four rules include special cases for scalars that we haven't mentioned yet. With those special cases added, the rules look like this: if one operand is a scalar, the other operand can be any shape at all, and the operator is applied to the scalar and each element of the array, with the only stipulation being that for the two slash operators (rules C and D) the scalar must be in the denominator. These five rules had been part of MATLAB for decades, but in September of 2016, version R2016b added a sixth rule.
It's rule F, which says that you can now add vectors to arrays, subtract vectors from arrays, or perform any array operation between a vector and an array. Not just any vector and any array, though: you can only do it if the vector is either a row vector whose length equals the row length of the array, or a column vector whose length equals the column length of the array.

Script examples of binary arithmetic operations

A = [100 200 300; 400 500 600]
A = 2×3
100 200 300
400 500 600
c = [7;9]
c = 2×1
7
9
A_plus_c = A + c
A_plus_c = 2×3
107 207 307
409 509 609
It added the column vector to each column of the array: 7 was added to every element of the first row, and 9 was added to every element of the second row.
If you're running a MATLAB installation older than R2016b, you'll get an error message that says matrix dimensions must agree. But don't despair: you can accomplish the same thing by using the built-in function repmat to make an array of copies of the vector.
C = repmat(c,1,3)
C = 2×3
7 7 7
9 9 9
Now we can add:
A+C
ans = 2×3
107 207 307
409 509 609
The addition above follows rule F. The next examples use the other rules (the same-size rule and the scalar special cases) with various integer types:
int8(123) - int8(40)
ans = int8
83
uint16([1 2; 3 4]) + uint16([64 28;10 55])
ans = 2×2 uint16 matrix
65 30
13 59
%uint16([1 2; 3 4])* uint16([64 28;10 55])
uint16([1 2; 3 4])*uint16(64)
ans = 2×2 uint16 matrix
64 128
192 256
uint16(4)*uint16([64 28; 10 55])
ans = 2×2 uint16 matrix
256 112
40 220
uint16([1 2; 3 4])*64
ans = 2×2 uint16 matrix
64 128
192 256
4*uint16([64 28; 10 55])
ans = 2×2 uint16 matrix
256 112
40 220
single(1) + double(2)
ans = single
3
int64(3)^2
ans = int64
9
%int64(9)^0.5
%sqrt(int64(9))
Both commented-out lines cause errors: an integer type has to hold a whole number (as in the int64 example above), but a fractional power or a square root generally doesn't produce one.
In fact, there is just one rule: the type of the output of any legal arithmetic operation is the same as the type of the operand that occupies the least space in memory.
So, for example, if we add an 8-bit integer to a double, the result is an 8-bit integer, because an 8-bit integer occupies less space than a double, which, you may remember, uses 64 bits.
Or say we multiply an array of singles by an array of doubles: the type of the output is single, because a single occupies only 32 bits.
Extending the rule to non-mixed-mode arithmetic: when both operands have the same type, then, of course, the output has that same type too.
n = int16(9876)
n = int16
9876
x = 12
x = 12
Now we'll add a double to a 16-bit integer. Since the 16-bit integer occupies only 16 bits while the double occupies 64 bits, the result is an int16. Here are more operations. These are all mixed mode arithmetic operations because the operands have different types, and in every case we get an int16.
x + n
ans = int16
9888
n*x
ans = int16
32767
n/x
ans = int16
823
n^x
ans = int16
32767
x/n
ans = int16
0
An integer type can't hold a fraction, so 12/9876 rounds to 0.
x/9876
ans = 0.0012
Here both operands are doubles, so the answer can be a fraction.
So why does MATLAB choose the smaller type for its result? That's a very good question. As a matter of fact, most other programming languages do give you a double when you mix a double with an integer. But MATLAB has a very good answer to this very good question: it produces an integer to save memory.
Let's see an example of how MATLAB's approach saves memory. I have an image of the MATLAB logo saved as a PNG file here in this folder.
Let's read that into a variable and display it.
clear;
M=imread('matlab.png');
imshow(M)
As you can see over here in the workspace, the variable M,
which contains the image, is a 400 by 400 by 3 array, meaning that it has 400 rows, 400 columns, and 3 pages. Each page of the array contains a 400 by 400 array of color intensities, one page each for the red, green, and blue colors. This is known as an RGB image, in which each set of red, green, and blue values makes up one pixel.
What's most important to us here, though, is that the array is of type uint8.
We can calculate how much memory space this image requires, but let's take the easy way and let the function whos tell us, 480,000 bytes.
whos
Name Size Bytes Class Attributes

M 400x400x3 480000 uint8
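As a cross-check, that byte count follows directly from the array dimensions, since each uint8 element occupies exactly one byte:
400*400*3*1
ans = 480000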
Now, let's do a little image processing on it. I'll create a darker version of the image by dividing all the intensity values by three.
D=M/3;
imshow(D)
whos
Name Size Bytes Class Attributes

D 400x400x3 480000 uint8
M 400x400x3 480000 uint8
So the size of the image has not changed.
480,000 bytes for each of them.
That's because of the MATLAB rule that says that the type of the output of the arithmetic operation that produced D is the same as the type of its smallest operand.
The two operands were M, which is a uint8, and 3, which is a double, so the result is a uint8. If we were using a language like C, C++, or Java, which would produce a double instead of a uint8 as output, the result would be eight times as big. Here's the equivalent operation in those languages.
D=double(M)/3;
What we've done is converted the eight bit M into a 64 bit double.
So now the type of the operand that was taking less space has been changed to match the type of the operand that takes more space. This step, which is done before the arithmetic operation is applied, is called widening, and widening is exactly what is done by C, C++, and Java for mixed mode arithmetic operations. So let's see how big D is now.
whos
Name Size Bytes Class Attributes

D 400x400x3 3840000 double
M 400x400x3 480000 uint8
Almost four megabytes. You may think that four megabytes is nothing these days, and you would be right. But once you start dealing with medical images such as MRI or CT, which can consist of over 100 individual images, or with videos that can have 30 or even 60 frames per second, all of a sudden we're talking about gigabytes, and a factor of eight becomes crucial. MATLAB's approach, by comparison, is to use narrowing, like this:
D=M/uint8(3);
So now the type of the operand that takes more space is changed to match the type of the one that takes less space, thereby saving memory. And it's not just a matter of saving memory: operations on wider operands take more time too. So MATLAB's approach of narrowing in mixed mode arithmetic saves both space and time. The downside is that during narrowing, values are rounded off, and you'll find people online railing away about that, but in fact a difference of 0.5 is rarely even detectable in an image.
And you can always override this behavior by explicitly widening the smaller operand.
If you do widen, make sure you widen the operand, as we have done, and not the result of the operation; if you widen the result of the operation, you can lose accuracy.
Here's a simpler example to show what happens.
a=int8(17)
a = int8
17
b=double(a)/2
b = 8.5000
c=double(a/2)
c = 9
So, b and c have the same type, double, but they have different values. Do you see why?
Well, it's because for b we widened the operand, and for c we widened the result of the operation. When we calculate b, we first convert a to a double and then divide by 2.
Since 2 is, by default, also a double, the result of the operation is a double. But when we calculate c, we first divide by 2.
And since a is an int8, the result is an int8, which means that the exact answer, 8.5, is rounded to the integer 9. We then convert that 9 from an int8 to a double.
And now let's see what happens if we use an int8 type for the 2, as well as for the A.
f=int8(2)
f = int8
2
b=double(a)/f
b = int8
9
c=double(a/f)
c = 9
This time, b and c have different types but the same value.
Here again, for b we widened the operand, and for c we widened the result of the operation. But this time, since f is an int8, the result for b is an int8 too, so it is rounded to 9.
Summary

Assignment

% Pramod Kumar Yadav
%[linkedin,insta,fb,github]@iampramodyadav

Problem 1: Edge Detection

Write a function that detects the edges of objects in an image.
[r, c] = size(M)
r = 400
c = 1200
out = zeros(r-2,c-2);
[r, c] = size(out)
r = 398
c = 1198
% function out = edgy(in)
% [n, m] = size(in);
% out = zeros(n-2,m-2);
% [n, m] = size(out);
% % Convert to double for calculations
% in = double(in);
%
% m1 = [-1 0 1; -2 0 2; -1 0 1];
% m2 = [1 2 1; 0 0 0; -1 -2 -1];
%
%
% for ii = 1:n
%
% for jj = 1:m
% sx = in(ii:ii+2,jj:jj+2) .* m1;
% sy = in(ii:ii+2,jj:jj+2) .* m2;
% %output pixel value
% out(ii,jj) = sqrt(sum(sum(sx(:)))^2 + sum(sum(sy(:)))^2);
% end
%
% end
%
% % Convert back to uint8
% out = uint8(out);
% end
clear;
M=imread('matlab.png');
imshow(M)
edg = edgy(M);
figure
imshow(edg);
Warning: Image is too big to fit on screen; displaying at 67%
Official solution:
% function out = cedgy(in)
% % Get the size of the input image
% [r, c] = size(in);
%
% % Create an output array that is two rows and columns smaller
% out = zeros(r-2,c-2);
%
% % Use the size of the new array for looping
% [r, c] = size(out);
%
% % Convert to double for doing calculations
% in = double(in);
%
% % Create the horizontal and vertical edge detector filters
% ex = [-1 0 1; -2 0 2; -1 0 1];
% ey = [1 2 1; 0 0 0; -1 -2 -1];
% for ii = 1:r
% for jj = 1:c
% sx = in(ii:ii+2,jj:jj+2) .* ex;
% sy = in(ii:ii+2,jj:jj+2) .* ey;
% % Calculate the output pixel value
% out(ii,jj) = sqrt(sum(sum(sx(:)))^2 + sum(sum(sy(:)))^2);
% end
% end
% % Convert back to uint8
% out = uint8(out);
% end
edg = cedgy(M);
figure
imshow(edg);
Warning: Image is too big to fit on screen; displaying at 67%

Problem 2: Audio Mixer

Write a function that mixes a multi-column audio recording into a single channel.
help flip
flip Flip the order of elements
Y = flip(X) returns a vector Y with the same dimensions as X, but with the
order of elements flipped. If X is a matrix, each column is flipped
vertically. For N-D arrays, flip(X) operates on the first
nonsingleton dimension.

flip(X,DIM) works along the dimension DIM.

For example, flip(X) where

X = 1 4 produces 3 6
2 5 2 5
3 6 1 4


Class support for input X:
float: double, single
integers: uint8, int8, uint16, int16, uint32, int32, uint64, int64
char, logical

See also fliplr, flipud, rot90, permute.

Reference page for flip
Other functions named flip
% function v = mixit(M,N)
% if size(M,2) ~= length(N)
% v = [];
% else
% N = N(:);
% M = (2*double(M)/(2^16-1))-1;
% v = M*N;
% if max(abs(v)) > 1 % if the max absolute value exceeds 1
% v = v/max(abs(v)); % divide the entire vector by it
% end
% end
% end
A = 2.^(0:16)';
A = [A flip(A)] - 1;
A = uint16(A);
format long
mixit(A,[1 1])
ans = 17×1
0
-0.503913879277201
-0.755847750865052
-0.881768550557478
-0.944636678200692
-0.975886197616301
-0.991141868512111
-0.998031526336025
-1.000000000000000
-0.998031526336025

Linear Equations

Simultaneous Linear Algebraic Equations

Because of the definition of matrix multiplication, the matrix equation Ax = b corresponds to a set of simultaneous linear algebraic equations:
Example: 4x1 + 5x2 = 6 and 5x1 - 2x2 = 14.
Solving by hand gives
x1=2.4848
x2=-0.7879
Let's solve it in MATLAB.
Here we have Ax = b, where the matrix A and the vector b are as follows:
A=[4 5;5 -2];
b=[6;14];
We can solve it,
x=A\b
x = 2×1
2.484848484848485
-0.787878787878788
to check if it is correct,
A*x
ans = 2×1
6
14
The forward slash doesn't work with A and b no matter which one comes first, but the backward slash runs without error in both orders:
%A/b
% b/A
A\b
ans = 2×1
2.484848484848485
-0.787878787878788
b\A
ans = 1×2
0.405172413793103 0.008620689655172
This follows from the compatibility rules we gave for the forward and backward slash operators in our lecture on mixed-mode arithmetic. The relevant rules are that the operands must have the same number of columns for the forward slash and the same number of rows for the backward slash.
Now let's solve a system with 20 equations in 20 unknowns. I'll use the function rand to gin up a matrix A of coefficients and a vector b of constants.
A=rand(20,20);
b=rand(20,1);
x=A\b;
Now let's check the error. It's very small, so the solution is very good.
e=A*x-b;
max(abs(e))
ans =
6.439293542825908e-15

Inconsistent Equations

In some cases the solution that you get from MATLAB is not so good. It's not because MATLAB has made a mistake or has a bad algorithm; MATLAB's algorithm is extremely good. When the solution is not good, it's because it's not possible to get a good solution.
This situation surfaces when there is inconsistent information in the inputs.
Clearly this pair of equations is inconsistent. A plot of the lines represented by each of the equations will reveal that they are parallel.
A=[4 5;4 5]
A = 2×2
4 5
4 5
b=[6;12]
b =
6
12
x=A\b
Warning: Matrix is singular to working precision.
x =
-Inf
Inf
To understand it, it helps to visualize the equations on a two-dimensional plot. We can visualize any equation that has only two unknowns: we just plot pairs of values for x_1 and x_2 that satisfy the equation, with x_1 on the horizontal axis and x_2 on the vertical axis. To do that, we solve the equation for x_2 in terms of x_1, give x_1 a range of values, calculate x_2 for each of those values, and then plot the results like this.
x1=0:0.01:10;
x2=(6-4*x1)/5;
plot(x1,x2);
grid on
As you can see, the points form a straight line, which is a consequence of the fact that we're plotting a linear equation. For this plot, as x_1 goes toward plus infinity, x_2 goes toward minus infinity; as x_1 goes toward minus infinity, x_2 goes toward plus infinity.
Now, let's look at the other equation. We'll use the same X_1 values, but solve the second equation for x_2 instead of the first one.
x2=(12-4*x1)/5;
hold on
plot(x1,x2);
These are parallel lines, and the point at which parallel lines intersect is, well, nowhere. They don't intersect. But if they were just the tiniest bit slanted relative to each other, we would know that they would intersect somewhere, either way off to the right or way off to the left. The less slanted they were, the farther the intersection point would be from the origin. Now let's remember exactly what that whole warning said.
x=A\b
Warning: Matrix is singular to working precision.
x =
-Inf
Inf
Matrix is singular to working precision. The working precision it's talking about is double precision, because that's the type of our matrix. MATLAB is telling you that there may be some roundoff error causing the matrix to appear to be singular. If it's not actually singular, then instead of being parallel, these lines are tilted just the tiniest bit.
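As a side note (not from the lecture), you can detect this situation yourself by checking the determinant or the rank of the coefficient matrix:
det(A)
ans = 0
rank(A)
ans = 1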
Now let's do the same thing with equations that do have a solution, like our first set, and now you know the drill. We get an expression for x2 in terms of x1 for each equation, which defines a line for each equation and then plot both lines on the same graph.
close all
x1=0:0.01:10;
x2=(6-4*x1)/5;
hold off
plot(x1,x2);
grid off
x2=(14-3*x1)/(-2);
hold on
plot(x1,x2);
A=[4 5;3 -2];
b=[6;14];
x=A\b
x =
3.565217391304348
-1.652173913043478
plot(x(1),x(2),'*')
The solution to the consistent set is a normal pair of numbers and the solution to the inconsistent set is a pair of infinities. That star is the solution and it's right there at the intersection. So we've studied two examples of simultaneous linear algebraic equations, a consistent set of equations and an inconsistent set. We plotted the lines for the equations and saw that for the consistent set, they intersect, while for the inconsistent set, they do not.

Overdetermined Problems

So is that always true? Do the lines cross if and only if the equations are consistent?
Well, no. We're now going to look at a set of inconsistent equations whose lines do intersect and here they are.
close all
x1=0:0.01:10;
x2=(6-4*x1)/5;
hold off
plot(x1,x2);
grid off
x2=(14-3*x1)/(-2);
hold on
plot(x1,x2);
x1=0:0.01:10;
x2=(19-7*x1)/2;
plot(x1,x2)
We have three different intersections, each of which determines a different solution to a different pair of equations. So these three equations determine three different solutions when we're looking for just one, and that's why we say the equations are overdetermined. If all three lines intersected at the same point, the equations would not be overdetermined and that intersection would be the solution, but they don't. So no solution to these three equations exists. That is almost always the case when you have more equations than unknowns.
What would happen if we were to ask the backslash operator to find that nonexistent solution? Let's find out.
A=[4 5;3 -2;7 2];
b=[6;14;19];
x=A\b
x =
3.285283474065138
-1.575392038600725
Once again, MATLAB gives a solution when there is no solution. But this time it doesn't give us infinities, this time the solution looks perfectly normal.
So what is this so-called solution? Well, to answer that let's start by plotting it.
plot(x(1),x(2),'*')
The solution that the backslash operator found is not at any of the intersections. It's not even on any of the lines, so it doesn't solve even one of the equations. But you have to admit it's pretty close to all of them, and that closeness is no accident. MATLAB's solution to a set of overdetermined equations is the one that comes closest to solving all three equations simultaneously, where "closest" is defined in a special way. To see what that special way is, let's look at the error in its solution.
e=A*x-b
e =
-0.735826296743073
-0.993365500603137
0.846200241254515
Instead of looking at the error with the maximum absolute value, which in this case equals 0.9934 and pertains to just one equation (the second one), we're going to look at a measure of error that includes all the equations. It's called the squared error, and it equals the sum of the squares of all the elements of e.
sse=sum(e.^2)
sse =
2.244270205066344
This is how the solution to a set of overdetermined equations is defined: it's the one solution that has the least sum of squared errors. This method of solving a set of overdetermined equations is called the least squares solution, and it's the one most likely to be the best solution for any given problem in engineering or science that involves simultaneous linear equations.
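As a quick numerical sanity check of that claim (my own illustration, reusing the same A and b as above), we can perturb the backslash solution in many random directions and confirm that none of them achieves a smaller sum of squared errors:

```matlab
A = [4 5; 3 -2; 7 2];
b = [6; 14; 19];
x = A\b;                          % least-squares solution
sse = sum((A*x - b).^2);          % its sum of squared errors
rng(0);
never_better = true;
for k = 1:1000
    x_try = x + 0.01*randn(2,1);  % small random perturbation of x
    sse_try = sum((A*x_try - b).^2);
    never_better = never_better && (sse_try >= sse);
end
never_better                      % true: no perturbation beats the least-squares x
```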
Of course, when there's a true solution, as you would expect when there are the same number of unknowns as equations, the error will be very small. To find out what "very small" means to MATLAB, you can look at the value returned by the built-in function eps, which is short for epsilon, the Greek letter that stands for an arbitrarily small number in mathematical proofs.
eps
ans =
2.220446049250313e-16
In MATLAB, it also stands for a small number, but it's not so arbitrary. eps is the difference between the number 1.0 and the next largest number that a double-precision value can represent. If the value of the sum of squared errors for your solution is more than, say, 1,000 times eps, then you can be almost certain that you've either typed something wrong or you're solving a set of overdetermined equations.
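A minimal sketch of that rule of thumb (my own illustration, reusing the overdetermined example from above):

```matlab
A = [4 5; 3 -2; 7 2];
b = [6; 14; 19];
x = A\b;
sse = sum((A*x - b).^2);          % sum of squared errors of the solution
if sse > 1000*eps
    fprintf('sse = %g: probably overdetermined, or a typo somewhere.\n', sse);
else
    fprintf('sse = %g: looks like a true solution.\n', sse);
end
```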
clear all;
close

Underdetermined Problems

There is not enough information provided by the equations to determine the solution completely.
Our first example has one equation and two unknowns, and our second one has two equations with four unknowns.
If we solve the first example with the backslash operator, we get
x2 = −(4x1 − 6)/5. This relationship shows that we can choose an infinite set of values of x1 from −∞ to +∞, as long as we choose the right x2 to go with it. If MATLAB is asked to solve for x in this situation, it will find one, and only one, of this infinity of solutions, preferring one that includes as many zeros as possible (one in this case):
A=[2 10];
b=4;
x=A\b
x =
0
0.400000000000000
Let's try the second example:
A=[2 3 -6 7;3 -2 4 9];
b=[5;14];
x=A\b
x =
0
0
0.646341463414634
1.268292682926830
If you see even one zero in the solution to simultaneous equations (and I mean an exact zero), then you have probably got fewer equations than unknowns.
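As an aside, backslash is not the only way to pick one solution out of the infinity of them. The built-in pinv (pseudoinverse) picks the minimum-norm solution instead, which generally contains no zeros. A small side-by-side sketch of my own, using the first example:

```matlab
A = [2 10];
b = 4;
x_bs = A\b          % backslash: [0; 0.4], contains an exact zero
x_mn = pinv(A)*b    % minimum-norm solution: no zeros
A*x_bs - b          % essentially 0: both satisfy the equation
A*x_mn - b          % essentially 0
```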

Ill-Conditioned vs Well-Conditioned Equations

MATLAB provides a function called cond, which is short for condition number; it will look at your matrix and tell you how ill-conditioned it is. Let's try it on a couple of different sets of equations.
These equations look more like what you would expect in real-world problems, because the numbers aren't all integers like in our simplified examples.
clear;
close;
A1=[27.3 59.4;63.2 33.4];
cond(A1)
ans =
2.964025502648970
A2=[41.9 59.1;57.5 81.1];
cond(A2)
ans =
9.457424998909468e+04
Bigger is worse. The condition number for each set of equations is the maximum factor by which the percent error can increase from the inputs to the outputs. Since we don't want to increase the percent error from the inputs to the outputs, smaller is better.
If the typical error you would expect in the measured values on the right-hand side of the equations is, say, one percent for each of these two examples, then the error in the output x for the first example will be no bigger than 2.964 percent. But for the second example, the error in x could be as large as 94,574 percent. The break point from well-conditioned to ill-conditioned depends on the percent accuracy that you need for the outputs versus the percent accuracy that you can achieve for the inputs.
If you can tolerate an error of roughly three percent and you've got one percent accuracy in your inputs, then the first set of equations is well-conditioned. But there is no conceivable scenario in which you could accept a 94,000 fold magnification of error.
The second set of equations is definitely ill-conditioned. Let's look at some example input errors for these equations and see what sort of output errors we get. Let's solve the first set of equations to start.
b1=[78.1;91.1];
x1=A1\b1
x1 =
0.986116681795473
0.861599572171441
b1_err=b1+[1.0;0.73]
b1_err =
79.099999999999994
91.829999999999998
pct_b1_err=100*norm(b1_err-b1)/norm(b1)
pct_b1_err =
1.031795099347154
So we have about a 1% error in the input.
x1_err=A1\b1_err
x1_err =
0.989621639118166
0.876823724782391
Let's use norm again to calculate the percent change:
pct_x1_err=100*norm(x1_err-x1)/norm(x1)
pct_x1_err =
1.193008960292647
Our output changed by just under 1.2 percent. According to our theory, this percent change in the output can't be bigger than the condition number of A1 multiplied by the percent change that we made to the input. Let's see that condition number again.
cond(A1)
ans =
2.964025502648970
2.9640*1.0318
ans =
3.058255200000000
Yeah, 1.1930 is definitely smaller than 3.0583.
Let's do the same thing for the second set of equations:
b2=[58.9;80.8];
x2=A2\b2
x2 =
-9.437499999957515
7.687499999969878
b2_err=b2+[1.0;0.73]
b2_err =
59.899999999999999
81.530000000000001
pct_b2_err=100*norm(b2_err-b2)/norm(b2)
pct_b2_err =
1.238236505831397
x2_err=A2\b2_err
x2_err =
-2.466687499989504
1.758937499992558
pct_x2_err=100*norm(x2_err-x2)/norm(x2)
pct_x2_err =
2.389141563620828e+03
2,389.1 percent error, this is a disaster. It means that if the change we made in our input vector were the result of a one percent measurement error in the field, then the answer we would get as output would change by over 2,000 percent.
Let's check the theory again:
cond(A2)
ans =
9.457424998909468e+04
cond(A2)*pct_b2_err
ans =
1.171052888481216e+05
The limit, which is the condition number times the percent input error, is over 117,000 percent, so the 2,389.1 percent output error is indeed below the limit.
What do you do when the condition number is too large?
Add one or more equations.

over-determined system

The simultaneous equations that we get in science and engineering are determined from real-world problems. Typically, a real-world problem allows for the determination of additional equations.
Suppose we had determined one additional equation for the second example, one whose coefficients are 69.9 and 31.7.
Now we have an overdetermined system for our third example, because we have three equations and just two unknowns.
A3=[41.9 59.1;57.5 81.1; 69.9 31.7]
A3 =
41.899999999999999 59.100000000000001
57.500000000000000 81.099999999999994
69.900000000000006 31.699999999999999
cond(A3)
ans =
4.178061211291642
We can live with that. If we can't live with an upper limit of 4.1781 as a magnification of percent error from input to output, we might be able to look at the problem again and figure out a fourth equation.

Assignment

Problem: Electric circuit

Write a function called voltage that computes the voltages at the junctions A, B, and C.
After analyzing the circuit, we form three equations:
% function sol = voltage(V,R)
%
% M = [R(2)*R(7) + R(1)*R(2) + R(1)*R(7), -R(1)*R(2), 0;
% -R(3)*R(4)*R(8), R(4)*R(7)*R(8) + R(3)*R(4)*R(8) + R(3)*R(4)*R(7) + R(3)*R(7)*R(8), -R(3)*R(4)*R(7);
% 0, -R(5)*R(6), R(6)*R(8) + R(5)*R(6) + R(5)*R(8)];
%
% y = V*[R(2)*R(7);R(4)*R(7)*R(8);R(6)*R(8)];
%
% sol = M\y;
%
% end
R = [1,2,4,5,13,4,8,1];
V = 10;
voltage(V,R)
ans =
6.518574469546110
4.741468104099430
4.152990455263338

Problem: Linear Regression

Given a set of approximate x and y coordinates of points in a plane, determine the best-fitting line in the least squares sense. Using the standard formula of a line, ax + b = y, compute a and b. That is, write a function called lin_reg that takes two row vectors of the same length called x and y as input arguments (containing the x and y coordinates of the points) and returns two scalars, a and b, specifying the line, as output arguments. Here is an example run:
% function [a, b] = lin_reg(x,y)
%
% M = [x; ones(1,length(x))]';
% s = M \ y';
%
% a = s(1);
% b = s(2);
% end
v = rand(1,200) * 10 - 5;
x = v + randn(1,length(v)) / 2;
y = v + randn(1,length(v)) / 2;
[a b] = lin_reg(x,y)
a =
0.951803312228677
b =
0.062170490642611
plot(x,y,'.');
grid on
hold on
plot([-5 5],a*[-5 5]+b,'lineWidth',2);

Live Scripts

The output of the program will appear right inside the script, making your program seem to come alive; hence the name live script.
As the name suggests this tool works only with scripts, and not with functions.
To introduce live scripts, let's create an example.
We'll create a script to illustrate how atmospheric concentrations of carbon dioxide also known as CO2, have changed over the past decades. We'll also use a simple prediction model to see where we're headed.
To do all this, we need data. And fortunately we can get it easily. In fact, we've already downloaded the data into an Excel workbook. It came from the Mauna Loa Observatory, which is located on the Big Island of the state of Hawaii. Which we're looking at here from above, courtesy of NASA.
These data have been collected on the slope of Mauna Loa continually for half a century by the Earth System Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA), and they're freely available from their website.
Why is CO2 data collected here? Because up here, CO2 from plants, local car exhaust, and other CO2 sources in big cities is not a big problem, and prevailing winds constantly blow clean air in from the ocean. Yes, occasionally the active volcano spews out CO2, but that's rare and obvious when it happens, so the resulting corruption can easily be removed from the data.
And we've got the data right here in this Excel workbook, which is named CO2 Mauna Loa.
When we open it, we see four columns. The first is the year, the second is the month, the third is a fractional number that combines year and month, and the fourth column is the CO2 data. The reason for the fractional version of year and month is that it's better for plotting.
As you can see, the first month is March 1958, and if you check the decimal part, you'll find that it's the middle of March. In fact, every fractional date is at the middle of a month. Each CO2 reading in the fourth column is the mean value measured over that month, and the measured value is the number of CO2 molecules per million air molecules. So the mean parts per million of CO2 averaged over March 1958 was 315.71. A new row is added every month, and if we scroll down to the bottom of the spreadsheet, we see that the last of the measurements were made in January of 2020.
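The mid-month fractional dates can be reproduced with a one-line formula; this is my reading of the spreadsheet's convention, so treat it as an assumption:

```matlab
% Fractional date at the middle of a month: year + (month - 0.5)/12.
% For March 1958, the first row of the spreadsheet:
frac = 1958 + (3 - 0.5)/12
% frac is about 1958.2083, i.e., mid-March 1958
```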
You can read all about the data if you follow this link. If you do, you'll see that there's even more data available.

Mauna Loa Carbon Dioxide Concentrations

The Excel spreadsheet CO2 Mauna Loa.xlsx contains the CO2 data from Hawaii provided by the Earth System Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA).

Let's read the data in and plot it:

clear;
close all;
co2=xlsread("CO2 Mauna Loa.xlsx");
year=co2(:,3);
CO2=co2(:,4);
plot(year,CO2,'g');
grid on;
title("Monthly CO2 concentration");
xlabel("year");
ylabel("CO2 (ppm)");
Notice how the data is periodic. That's due to the seasonal variation across the year. But overall the trend is clearly upward.

Smooth the data and plot it with the original data:

But first, we specify a length of 12 for our smoothing window, meaning that 12 points from the original data are combined to produce each new point in the smoothed data.
window=12;
smoothed=smoothdata(CO2,'movmean',window);
plot(year,CO2,'g',year,smoothed,'k');
grid on;
title(sprintf('monthly CO2 concentration and %d-month smoothing',window));
xlabel("year");
ylabel("CO2(ppm)")
legend('monthly','smoothed','location','best');
As we can clearly see from the plot, the growth is not a straight line. It looks very much like an exponential curve, so let's try an exponential fit. To do that, we'll use a few functions from MATLAB's Curve Fitting Toolbox, which was introduced in release R2016b.
Let's start a new section and give it the heading: Model the growth of CO2 as exponential and compare to smoothed data.

Model the growth of CO2 as exponential and compare to smoothed data:

The first three lines use functions from the Curve Fitting Toolbox. fittype comes first, and its first input argument is a string giving the formula for the type of curve we want to fit to the data. The formula has the form a plus b times e raised to the power of n times the time elapsed since 1958 in years, a general exponential growth function dating from when the first CO2 concentration was measured at Mauna Loa.
model=fittype('a+b*exp(n*(year-1958))','independent','year');
options=fitoptions('method','NonlinearLeastSquares','StartPoint',[0,0,0]);
CO2_fit=fit(year,CO2,model,options);
CO2_model = CO2_fit.a + CO2_fit.b*exp(CO2_fit.n*(year-1958));
plot(year,smoothed,'k',year,CO2_model,'r--'); grid on;
title("CO2 Smoothed and CO2 Model");
xlabel("Year"); ylabel("CO2 (ppm)");
legend('Smoothed','Model','location','best');
The second two arguments tell the function that year is the independent variable, leaving a, b, and n as constants that must be determined by Matlab. Its output is the model to be fit to the data.
fitoptions is second. Its first two arguments tell the function that the method for determining the best values of a, b, and n is nonlinear least squares. The last two arguments give the initial values of a, b, and n that MATLAB will use in its internal iterative search for the values of these three constants that minimize the sum of squared differences between the fit and the input data. Those three values are 0, 0, and 0. Its output is a variable that captures those options.
fit is third. Its output is a structure whose fields describe the model that has been fit to the CO2 data.
The next line uses the fields a, b, and n, of the struct CO2 underscore fit to calculate values of the model for every element of the variable year. We assign those values to the vector CO2_model.
Now that we have values for the model, we can plot them. To get a visual idea of how well the model fits the smoothed data, we plot both of them on the same graph with the smooth data in black and the exponential model as a red dashed line.
As before, we turn on the grid, then decorate the plot with titled labels for the axes and a legend.
Well, that's an amazing fit. You're looking at a real world prime example of exponential growth.
If you request a second output argument from fit, you'll get another struct whose fields contain several statistics. One of them is the root mean squared error, and it equals 2.2, meaning, roughly, that the average error is two parts per million, or about six tenths of one percent of the mean parts per million. As I mentioned, this is a good fit.
Now that we've got such a close-fitting model, we can use it to extrapolate out to mid-century. To do that, we'll simply plot the model from January 2020 to January 2050, in a new section.

Extend exponential model from end of monthly data to future:

future= year(end):1/12:2050;
CO2_future = CO2_fit.a + CO2_fit.b*exp(CO2_fit.n*(future-1958));
plot(year,CO2,'g',future,CO2_future,'r--'); grid on;
title("CO2 Measured and Projected to the Future");
xlabel('Year'); ylabel("CO2 (ppm)");
legend('Monthly',"Future",'location','best');
The first line gets us the dates in the future with a step equal to one month.
The second line is similar to the same command we used in the previous section, but this time year is replaced by future.
On the third line, we plot the actual data from March 1958 through January 2020, and the model from January 2020 through January of 2050.
fprintf('The predicted CO2 concentration for January %.0f is %0.f ppm.',...
ceil(future(end)),CO2_future(end));
The predicted CO2 concentration for January 2050 is 510 ppm.
help ceil
ceil Round towards plus infinity.
ceil(X) rounds the elements of X to the nearest integers
towards infinity.

See also floor, round, fix.

Reference page for ceil
Other functions named ceil

Practice Quiz

% Pramod Kumar Yadav
%[linkedin,insta,fb,github]@iampramodyadav
1. How is a Live Script different from a regular script?
A Live Script combines regular text and MATLAB code. It allows the user to create a nicely formatted document with embedded code and the output of the code, such as plots.
2. What are the parts that make up a Live Script called?
Sections
3. What is the file extension of Live Scripts?
mlx

Error Handling

We've dealt with errors in two ways: one way while we're getting the bugs out of our program so that it'll work correctly, and a second way after the program is completed and someone who doesn't know how it works runs it. During debugging, when MATLAB gives us one of its dreaded red error messages, we've tried to determine the cause of the error from the message. Here's an example. It's a function called indexing with an error in it that's so simple you might be able to see it before I even run it.
% function diff=indexing(N)
% rng(0);
% v=randi(100,1,N);
% diff=[];
% for ii=1:length(v)
% diff(end+1)=v(ii+1)-v(ii);
% end
% end
When ii equals the length of v in this for loop, which in this case is 10, then ii + 1 is asking for the 11th element of a 10-element list. The for loop should stop at length(v) - 1 instead of length(v). It's that simple. Let's fix that, save it, and run it again.
% function diff=indexing(N)
% rng(0);
% v=randi(100,1,N);
% diff=[];
% for ii=1:length(v)-1
% diff(end+1)=v(ii+1)-v(ii);
% end
% end
indexing(10)
ans = 1×9
9 -78 79 -28 -54 18 27 41 1
In many cases, the user may inadvertently give a function input that causes an error, and MATLAB's generic error message, which would be adequate for the person who wrote the code, may not give the information that the user needs to understand what they've done wrong. That's especially true with more complicated code that calls other functions and subfunctions and maybe indexes into multiple arrays. And by the way, the people who are unfamiliar with the inner workings of the code just might include the person who wrote it, when that person runs it long after writing it and has forgotten what's going on under the hood. For that situation, we've dealt with errors in another way: we've tried to think of all the potential errors that users might make, and added code in our program to handle them.
Here's a function that does just that. It's called my_harmonic, and it calculates the nth term of the harmonic series, which is the sum of the inverses of the first n integers.
% function h=my_harmonic(n)
% if ~isscalar(n)||n<1||n~=floor(n)
% error('positive integer input expected')
% end
% h=1;
% for ii=2:n;
% h=h+1/ii;
% end
% end
my_harmonic(10)
ans = 2.9290
If we call it with 0, a red error message shows up because 0 isn't positive. The same happens with a fractional input, which is not an integer, and with a vector, which is not a scalar. All of these violations of the input requirements by the user cause the error function to quit the function that calls it and print its error message.
And mind you, it doesn't quit just the function that calls it; it quits the entire program, no matter how deeply nested the call to the error function is. Here's an example to show that.
% function h=harmonic_chain(n)
% h=sub_harmonic(n);
% end
%
% function h=sub_harmonic(n)
% h=my_harmonic(n);
% end
%
harmonic_chain(10)
ans = 2.9290
Suppose we call harmonic_chain with the value 2.3, which causes an error: it calls sub_harmonic with the same input, which calls my_harmonic with the same input. The error function was called while my_harmonic was running, so we have a chain of function calls, harmonic_chain to sub_harmonic to my_harmonic, which throws the error. When the error was thrown, my_harmonic was on top of the activation record stack. All that red stuff, with some blue stuff sprinkled in, tells us not only the error message that we gave to the error function, which is "positive integer input expected", but also what line each function on the stack was executing when the error function was called. The greater-than sign tells us that the function sub_harmonic is contained in the harmonic_chain M-file.
What if I don't want a program to quit every time some function in it throws an error?
Well, one thing we can do is change the way it checks its input so that instead of calling the error function, it just converts bad input into good input, like this, in my_harmonic_v2:
% function h=my_harmonic_v2(n)
% if ~isscalar(n)
% n=n(1); %convert to scalar
% end
% n=max(1,round(abs(n))); %convert to integer
% h=1;
% for ii=2:n;
% h=h+1/ii;
% end
% end
my_harmonic_v2(2)
ans = 1.5000
my_harmonic_v2(2.3)
ans = 1.5000
my_harmonic_v2([2 3 5])
ans = 1.5000
it repairs the faulty input and drives on. and it works well enough in this case, but there are problems with this approach.
The first problem is that in some cases, the approach of repairing faulty input can produce unanticipated outputs that can themselves cause an error in the caller. If, for example, the caller used as input a two-element vector like our [2 3] argument, you would likely expect the output to be a two-element vector. In that case, the caller might try to access the second element of the output argument, which wouldn't exist. Thus, the caller would cause an error, which would stop the program.
A second problem is that functions can fail in so many ways that we would need too many statements to check for them all, and in any case it's doubtful that we could anticipate every one of them. And if one got through, the program would halt anyway.
And the third problem is that when our function calls a function that we didn't write, for example one of MATLAB's built-in functions, or uses a MATLAB operator, that function or operator may throw an error and stop our program.

Exception Handling

There's a beautiful mechanism devised for exactly such cases. It's called exception handling, and it's available in MATLAB and most other modern computer languages. The word exception means an error that occurs in some part of a program, and exception handling means running code that is designated to be executed only when an exception occurs in that part of the program.
% function h=robust_harmonic_chain(n)
% h=sub_harmonic(n);
% end
% function h=sub_harmonic(n)
% try
% h=my_harmonic(n);
% catch
% h=[];
% end
% end
robust_harmonic_chain(2)
ans = 1.5000
robust_harmonic_chain(2.3)
ans =

[]
robust_harmonic_chain([2 3])
ans =

[]
The code from try through catch to end is a try catch block, and it performs exception handling.
In this simple example, each block has just one line, but there's no limit on the number of lines you can have in there. If an error is thrown while my_harmonic is executing, instead of halting the program, robust_harmonic_chain will run the code in the catch block, which in this case just sets h to the empty array.
To use the terminology of exception handling, any error that occurs in the try block is an exception, and the running of the code in the catch block, which is executed only when an error occurs in the try block, is exception handling.
In more complicated cases, for example, when it's not obvious from the return value that there was an issue, we may want to provide some sort of explicit message to the user that something was not quite right.
We can print a message with fprintf, or we can use MATLAB's built-in warning function, which is similar to the error function in that it prints a message along with its location in the code, but different in that it does not quit the program.
% function h=robust_harmonic_chain(n)
% h=sub_harmonic(n);
% end
% function h=sub_harmonic(n)
% try
% h=my_harmonic(n);
% catch
% h=[];
% warning('wrong input provided')
% end
% end
robust_harmonic_chain(2)
ans = 1.5000
robust_harmonic_chain(2.3)
Warning: wrong input provided
ans =

[]
robust_harmonic_chain([2 3])
Warning: wrong input provided
ans =

[]
Instead of red, it prints in a sort of orangish yellow: red means stop, orangish yellow means caution.
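As an aside, the warning function also accepts an identifier as its first argument, which lets callers silence specific warnings selectively. A small sketch (the 'MyCourse:badInput' identifier is my own invention):

```matlab
% A warning tagged with an identifier (Component:mnemonic format):
warning('MyCourse:badInput', 'wrong input provided');   % prints the warning
warning('off', 'MyCourse:badInput');                    % suppress just this one
warning('MyCourse:badInput', 'wrong input provided');   % now prints nothing
warning('on', 'MyCourse:badInput');                     % turn it back on
```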
What if I only wanted to handle one kind of exception, or wanted to handle different exceptions differently?

Matrix multiplication example

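The lecture's worked example isn't reproduced in these notes, but the idea can be sketched like this: catch the exception object itself with catch ME and inspect its identifier, so that only the inner-dimension mismatch from a matrix multiplication is handled and everything else is rethrown. (The identifier string 'MATLAB:innerdim' is an assumption; check ME.identifier in your own MATLAB version.)

```matlab
A = rand(3,4);
B = rand(3,4);          % wrong shape for A*B
try
    C = A*B;
catch ME                % ME is an MException object
    if strcmp(ME.identifier, 'MATLAB:innerdim')   % identifier is an assumption
        warning('inner dimensions do not agree; using the transpose of B instead');
        C = A*B';       % handle this one specific exception
    else
        rethrow(ME);    % anything else is not ours to handle
    end
end
```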
Assertion

there is one more concept related to error handling that we want to introduce. It's called an assertion.
An assertion is an assumption about variables in your code; if that assumption is violated, MATLAB will let you know. Assertions are used while you're developing your program to help you exterminate bugs. Once you've tested your program thoroughly, you can remove the assertions or comment them out.
Example:
% function assert_example
% %do some computation
% x=abs(randn);
% %do some more computation
% assert(x>=0);
% end
Let's run it and start by seeing what happens if everything is fine.
assert_example
Nothing happens, and that's what we want; we don't want to see any errors. Now let's create our own abs function inside assert_example.
% function assert_example
% %do some computation
% x=abs(randn);
% %do some more computation
% assert(x>=0);
% end
% function x=abs(x)
% x=-1*x; %incorrect implementation
% end
So when we happen to get a negative value from our bad abs function, we catch the error because it violates the assertion.
A nonsquare matrix may be vicious, but it will not kill you. Still, put assertions in your code and thank me later. Maybe they'll never throw an error, but if one does, you just saved yourself minutes, hours, or potentially days by immediately knowing the cause of your mysterious error: somehow that matrix turned out not to be square after all, or x turned out to be negative, even though you were absolutely sure about it.
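A minimal sketch of such a square-matrix assertion (my own illustration):

```matlab
A = magic(4);
assert(size(A,1) == size(A,2), 'A must be square');   % passes silently
B = rand(3,5);
% The next line, if uncommented, would throw: "B must be square"
% assert(size(B,1) == size(B,2), 'B must be square');
```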

Summary

Algorithmic Complexity

Fibonacci Series

In this first lecture of the lesson, we'll focus on something fundamental called algorithmic complexity. It's actually an entire branch of computer science that deals with analyzing the performance of algorithms. It tries to answer questions such as: how much time does this algorithm take? How much memory does it need?
Well, that depends on the computer, and it depends on the inputs. It depends on a lot of things, but what matters most is the algorithm itself. The same holds for its memory needs.
Let's start with a simple example to illustrate why the algorithm chosen to solve a problem really matters. Let's revisit our favorite mathematical series from high school, the Fibonacci series.
% function f=fibo(n)
% if n<=0
% f=1;
% else
% f=fibo(n-2)+fibo(n-1);
% end
% end
tic; fibo(20), toc;
ans = 17711
Elapsed time is 0.011604 seconds.
tic; fibo(25), toc;
ans = 196418
Elapsed time is 0.016770 seconds.
tic; fibo(30), toc;
ans = 2178309
Elapsed time is 0.166891 seconds.
tic; fibo(35), toc;
ans = 24157817
Elapsed time is 2.156503 seconds.
tic; fibo(37), toc;
ans = 63245986
Elapsed time is 6.007840 seconds.
tic; fibo(40), toc;
ans = 267914296
Elapsed time is 23.654518 seconds.
Our function seems to be slowing down here.
Let's instrument our function with a persistent variable to count how many recursive calls we make in total. Here it is: fibocnt.
% function [f cnt] = fibocnt(n)
% persistent count; % must specify persistent
% if isempty(count) % first time it is set to [] by MATLAB
% count = 1; % so we set it to 1
% else
% count = count + 1; % subsequent times, we increment by 1
% end
% if n <= 2
% f = 1;
% else
% f = fibocnt(n-2) + fibocnt(n-1);
% end
% cnt = count; % a persistent variable cannot be an output argument
% end
A quick refresher on persistent variables: they retain their values across function calls even though they're local. When the function is first called, their value is the empty array. That's why we have isempty here, and an if statement to initialize the count to one.
A persistent variable can't be an output argument; that's just a rule. So we add an extra variable called cnt down here to serve as the second output of the function. Let's try it. One thing to remember: you need to reset the persistent variable between calls, like this.
clear fibocnt
[f c]=fibocnt(2)
f = 1
c = 1
clear fibocnt
[f c]=fibocnt(10)
f = 55
c = 109
clear fibocnt
[f c]=fibocnt(20)
f = 6765
c = 13529
clear fibocnt
[f c]=fibocnt(25)
f = 75025
c = 150049
clear fibocnt
[f c]=fibocnt(35)
f = 9227465
c = 18454929
To compute the 35th element, our simple and elegant seven-line function made over 18 million function calls, instead of just doing 34 additions. Wow. That is truly inefficient and wasteful. What's wrong here?
Is it MATLAB? No, it is our algorithm, or really our approach to the problem. We didn't realize that doing it this particular way, calling the function itself for n minus 2 and for n minus 1, ends up repeating a lot of unnecessary work. So our algorithm is bad. Let's qualify that statement: the function seemed to work fine for small n, right? Up to, say, n equals 25, it finished with little delay, but once n started to get bigger, say n equals 35, it slowed way down.
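Another standard cure for this repeated work, besides returning the whole series, is memoization: cache results as they are computed so each element is computed at most once. Here's a sketch of my own (not from the lecture) that reuses the persistent-variable idea from fibocnt as a cache:

```matlab
function f = fibo_memo(n)
% Memoized Fibonacci: each element is computed at most once,
% because results are cached in a persistent vector across calls.
persistent cache
if isempty(cache)
    cache = [1 1];                      % base cases F(1) and F(2)
end
if n <= numel(cache)
    f = cache(n);                       % already computed: just look it up
else
    f = fibo_memo(n-2) + fibo_memo(n-1);
    cache(n) = f;                       % remember it for next time
end
end
```

Calling fibo_memo(35) does only on the order of 35 additions instead of 18 million calls, and repeated calls are even faster because the cache persists.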
We can modify the requirements and return the entire series instead of just the last element, like this: we make only a single recursive call to compute the Fibonacci series for n minus 1, save it in a vector, then simply add the last two elements to get the nth element of the series. If you try this, you'll see that it's lightning fast.
% function f=fibo_list(n)
% if n<=2
% f=ones(1,n);
% else
% f=fibo_list(n-1);
% f=[f f(end-1)+f(end)];
% end
% end
tic; fibo_list(35), toc;
ans = 1×35
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465
Elapsed time is 0.005762 seconds.
tic; fibo(35), toc;
ans = 24157817
Elapsed time is 2.162322 seconds.
Here we can see that fibo_list is lightning fast compared to fibo.
If returning the entire series is not acceptable, we can just hide this version as a sub-function.
% function f=fibo_last(n)
% f=fibo_list(n);
% f=f(end);
% end
% function f=fibo_list(n)
% if n<=2
% f=ones(1,n);
% else
% f=fibo_list(n-1);
% f=[f f(end-1)+f(end)];
% end
% end
fibo_last(35)
ans = 9227465
tic; fibo_last(50), toc;
ans = 1.2586e+10
Elapsed time is 0.000596 seconds.
So it's very fast: even the 50th element takes well under a millisecond.
We've achieved our goal, which was to demonstrate that the algorithm chosen to solve a given problem, can have a huge impact on the speed of the program we end up with.
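Another standard remedy, not shown in the lecture, is memoization: cache every element as it is computed so the recursion never recomputes one. Here is a sketch of our own (the name fibo_memo and its details are our assumption, not the course's code); it would live in its own function file, fibo_memo.m.

```matlab
function f = fibo_memo(n)
% FIBO_MEMO nth Fibonacci number, recursive but memoized.
% The persistent cache keeps every element already computed, so each
% one is calculated only once -- on the order of n calls, not ~2^n.
persistent cache
if isempty(cache)
    cache = [1 1];                        % seed: first two elements
end
if n <= numel(cache)
    f = cache(n);                         % already known: just look it up
else
    f = fibo_memo(n-2) + fibo_memo(n-1);
    cache(n) = f;                         % remember it for later calls
end
end
```

Unlike fibocnt, the cache here never needs clearing between calls, because stored values stay correct.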

reverse the elements of a vector

Let's say our task is to reverse the elements of a vector.
v=1:10
v = 1×10
1 2 3 4 5 6 7 8 9 10
v_reversed=v(end:-1:1)
v_reversed = 1×10
10 9 8 7 6 5 4 3 2 1
v_reversed=flip(v)
v_reversed = 1×10
10 9 8 7 6 5 4 3 2 1
Let's pretend that these built-in reversing methods don't exist and write our own function. Also, let's forget recursion. First, we need to come up with an algorithm. Let's make our problem slightly more difficult and say that we want to solve it in place. That is, we don't want to create another vector and simply copy the elements from one to the other in reverse order. Instead, we want to move the elements within the original vector.
% function v=my_flip(v)
% for ii=2:length(v)
% tmp=v(ii);
% for jj=ii:-1:2
% v(jj)=v(jj-1);
% end
% v(1)=tmp;
% end
% end
v=1:10;
my_flip(v)
ans = 1×10
10 9 8 7 6 5 4 3 2 1
Let's check a bigger vector and see if it slows down.
tic; my_flip(1:100),toc;
ans = 1×100
100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 81 80 79 78 77 76 75 74 73 72 71 70 69 68 67 66 65 64 63 62 61 60 59 58 57 56 55 54 53 52 51
Elapsed time is 0.001783 seconds.
tic; my_flip(1:1e4); toc;
Elapsed time is 0.326099 seconds.
About a third of a second to reverse a 10,000-element vector. It's hard to tell whether that's good or bad.
But before we go any further, let me show you a better way of measuring the running time of a function.
tic, toc gives a good idea of the running time, but it measures the clock time between tic and toc which may include time devoted to other things your computer is doing while running my_flip.

timeit function

A better approach is to use MATLAB's timeit function. To measure the time my_flip takes, it does just what its name says: it times it. And it does a better job because it comes closer to measuring just the time used by the function you're evaluating, reducing the impact of extraneous time spent on swapping, housekeeping, and whatever else your operating system decides to do in the middle of running my_flip. That extraneous time makes it hard to compare functions that don't take long to run. timeit uses tic and toc, but to get a better estimate it runs the function multiple times and returns the median running time, which is robust against a run or two in which the CPU does non-my_flip work.
One catch is that timeit requires a handle to the function we want to time, and it does not allow us to include arguments for the function we're testing.
So how are we going to time my_flip, which does require an argument?
Well, here comes the anonymous function to the rescue. Here's how it works.
t10=timeit(@() my_flip(1:1e4))
t10 = 0.3411
t100=timeit(@() my_flip(1:1e5))
t100 = 29.8582
t100/t10
ans = 87.5435
t20=timeit(@() my_flip(1:2e4))
t20 = 1.1346
t40=timeit(@() my_flip(1:4e4))
t40 = 4.6472
t20/t10
ans = 3.3265
t40/t10
ans = 13.6256
So when I doubled the size of the input from 10,000 to 20,000, the time grew by a factor of about 3.3, and when I quadrupled it, by a factor of about 13.6; those are close to the 4 and 16 a quadratic relationship would predict. It seems that there's a quadratic relationship here between the growth in the number of elements and the growth in the time required by our algorithm to calculate the output.
So if the outer loop iterates about n times, and the inner loop iterates an average of about n/2 times for each of those outer loop iterations, the total number of inner loop iterations will be approximately n times n/2, which equals n squared over 2.
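As a quick sanity check of this quadratic model (the arithmetic here is our own, not from the lecture), the predicted growth factors for the input sizes we timed are:

```matlab
% If time grows like n^2/2, doubling n should quadruple the time, and so on.
% Compare these predictions with the measured ratios t20/t10, t40/t10, t100/t10.
r2  = (2e4/1e4)^2;     % 4   -- predicted growth for doubling n
r4  = (4e4/1e4)^2;     % 16  -- for quadrupling n
r10 = (1e5/1e4)^2;     % 100 -- for 10x n
```

The measured ratios fall a bit below the predictions, which is expected: constant overheads matter more at the smaller sizes.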
If our algorithm analysis tells us that our approach will take too long, then we know that we need to find a better solution. While we're on the subject of a better solution, you've probably already figured out that there is a much simpler algorithm to flip the order of elements of a vector in place. We can simply swap the first and last elements, and then the second and next to last elements, and so on, until we reach the middle point.
Here's an implementation of that algorithm.
% function v=fast_flip(v)
% for ii=1:ceil(length(v)/2)
% tmp=v(ii);
% v(ii)=v(end-ii+1);
% v(end-ii+1)=tmp;
% end
% end
tf100=timeit(@() fast_flip(1:1e5))
tf100 = 0.0030
Let's see how fast this function gets the job done. I've written a function to help with that called test_fast_flip.
% function test_fast_flip
% % Make a list of vector lengths:
% N = 1e6*(1:10);
% % Measure fast_flip time for a vector of each length:
% for ii = 1:length(N)
% t(ii) = timeit(@() fast_flip(1:N(ii)));
% fprintf('Time for %8d elements = %.4f\n',N(ii),t(ii));
% end
%
% % Plot time versus input size with a line and asterisks:
% plot(N,t,N,t,'r*');
It starts by constructing a list of 10 vector lengths spread uniformly from 1 million to 10 million. It calls timeit with vectors of these lengths, prints the vector lengths and the times, and then plots execution time versus vector input size. Let's run that.
test_fast_flip
Time for 1000000 elements = 0.0113
Time for 2000000 elements = 0.0246
Time for 3000000 elements = 0.0401
Time for 4000000 elements = 0.0521
Time for 5000000 elements = 0.0635
Time for 6000000 elements = 0.0776
Time for 7000000 elements = 0.0912
Time for 8000000 elements = 0.1006
Time for 9000000 elements = 0.1187
Time for 10000000 elements = 0.1268
So this second flipping algorithm is simpler and faster, and as you can see from the plot, the time is roughly a linear function of the input size.
There's a little bit of startup time, on the order of five thousandths of a second, and then the calculation time itself is very linear, with a little fluctuation because timeit doesn't completely eliminate the system's use of the CPU for other purposes.
Could we do better than linear for reversing in place?
No: we have to move each element, so there's little room for improvement over this algorithm.
Nevertheless, the built in MATLAB function still does better than our solution.
timeit(@() flip(1:1e7))
ans = 0.1163
Why is it better?
Well, they use a similar algorithm that's also linear, but the built-in functions are optimized to work as fast as possible using low-level features of your computer. In fact flip, like many other built-in functions of MATLAB, isn't even implemented in MATLAB itself.
In this section we analyzed two algorithms for reversing the elements of a vector.

Complexity notation

The branch of computer science dedicated to the study of algorithmic complexity is called complexity theory or analysis of algorithms, or just algorithms.
But are there any problems where we can do better than order N? Yes, there are. Consider the problem of computing the sum of the first n positive integers, say from 1 to 100. Instead of adding them up one at a time, we can use the closed-form formula
n*(n+1)/2
which takes the same handful of operations no matter how large n is. That's constant time complexity, written O(c) (often written O(1)).
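For instance, here is the formula next to the obvious loop, a small sketch of our own to make the contrast concrete: the formula's cost doesn't change with n, while the loop's grows linearly.

```matlab
n = 100;
s_formula = n*(n+1)/2;    % one multiply, one add, one divide: O(1)
s_loop = 0;
for k = 1:n
    s_loop = s_loop + k;  % n additions: O(n)
end
% both equal 5050 for n = 100
```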

sequential search

are there problems whose algorithmic solutions have a complexity somewhere between constant and linear time? Yes, there are.
Consider searching for a number in a vector. More precisely, the task is to return the index of the first occurrence of a given value in a vector of elements.
There's a built-in function that we could use, called find. It's not quite all we need for this task, because it doesn't actually search for a given value; instead, find(v,k) returns the indices of the first k nonzero values of v. But we can use a logical vector to make it work for us anyway.
v=randi(100,1,20)
v = 1×20
98 96 49 81 15 43 92 80 96 66 4 85 94 68 76 75 40 66 18 71
find(v==28)
ans =

1×0 empty double row vector
Now let's forget that there's a built-in function we could use, and write our own search function.
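The notes call my_search without listing it, so here is a minimal reconstruction of what the lecture's sequential search presumably looks like (the details are our assumption): it returns the index of the first occurrence of e in v, or 0 if e is absent.

```matlab
function index = my_search(v, e)
% MY_SEARCH index of first occurrence of e in v, or 0 if not found.
index = 0;                 % assume "not found" until we see e
for ii = 1:length(v)
    if v(ii) == e
        index = ii;        % found it: record the position...
        return             % ...and stop scanning
    end
end
end
```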
my_search(v,98)
ans = 1
my_search(v,28)
ans = 0
In this case, the worst case is when the element is not there, because it forces the algorithm to examine every single element hence the linear time complexity.
But I promised you something between linear time and constant time, didn't I? Well, what if the vector was sorted?
Let's say it's a sorted vector in increasing order, then can we come up with a better algorithm, that is a faster algorithm for finding a number in a sorted list?

Binary Search

If a list has been sorted before the search is undertaken, searches can be carried out much more quickly. The search method that should be used on a sorted list is the binary search. It is more complicated than the sequential search method, but the complication is well worth it: a binary search requires no more than 40 comparisons to find a number in a list of one million numbers!
% function index = binary_search(v,e,first,last)
% if nargin < 3              % first call: search the whole vector
%     first = 1;
%     last = length(v);
% end
% mid = fix( (first + last)/2 );
% if ~(first <= last)        % empty range: e is not in v
%     index = 0;
% elseif e == v(mid)
%     index = mid;
% elseif e < v(mid)
%     index = binary_search(v,e,first,mid-1);
% else
%     index = binary_search(v,e,mid+1,last);
% end
% end
clear
v=1:500;
binary_search(v,28)
ans = 28
binary_search(v,0)
ans = 0
Since zero is not in the list, the function returns 0.
Let's compare run times of the two versions on the worst case; the sequential search in the function my_search, and this algorithm.
First, we'll put the integers one through 100 million on the list. Then we'll search for zero, which is the worst case because it's not on the list.
v=1:1e8;
t_my_search=timeit(@() my_search(v,0))
t_my_search = 0.3348
t_binary_search=timeit(@() binary_search(v,0))
t_binary_search = 2.4922e-06
ratio=t_my_search/t_binary_search
ratio = 1.3433e+05
So here the binary search is about 134,000 times faster than the sequential search (on a 100-million-element list).
but why is the binary search so much faster?
Well, by cutting the problem in half every time, only a few steps are needed to cut down the list to a length of only one element. How many steps? Well, if you think about it, it's about equal to the base two logarithm of the length of the vector.
For example, 1,024 is 2^10. If we have a list of length 1,024 and we keep splitting it in half, we'll get to a one-element vector in 10 steps. For a billion-element list, we'll get there in 30 steps.
Such an algorithm has logarithmic time complexity or order log N. O(logN)
If we use the sequential search algorithm on a one billion element vector looking for something that's not on there, we need to take a billion steps. The binary search on the other hand would need only 30 steps.
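The step counts quoted above can be checked directly in MATLAB (a one-line check of our own):

```matlab
steps_1024    = ceil(log2(1024));   % 10 steps for a 1,024-element list
steps_billion = ceil(log2(1e9));    % 30 steps for a billion-element list
```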
On an unsorted list, your only choice is to plod through it item by item with sequential search. Sure, that's order N instead of order log N, but if the list is unsorted, what are you going to do? You're stuck with linear plodding. What about this idea: maybe we could sort the list first and then use that lightning-fast binary search. Well, that's a good idea if you need to search for lots of numbers on that list.
It's not worth it, though, if you need to find just one number or only a few, because sorting the list takes longer than sequentially searching it, even for short lists. It gets worse for long lists, because the time required to sort a list grows faster than the time required to sequentially search it. There are many good sorting algorithms available, but for a general sort the most efficient algorithms are all order N log N, O(NlogN). That's not as good as the order N complexity of that plodding sequential search.
So far, we've seen algorithms with constant, logarithmic, linear, and quadratic time complexities, and we've learned that general sorting algorithms have NlogN complexity. We can't do better than constant.
But are there some problems whose best solutions are worse than quadratic? Yes, indeed.
The best solutions to many important problems have worse, many times, much worse time complexity. In fact, there are problems that can't be solved in the world's fastest computers in a reasonable time at all, no matter what algorithm you use.

traveling salesperson problem

Let's consider one, the traveling salesperson problem. Given a country with many cities and various roads connecting the cities, what's the shortest path a person could take to visit each city and return home?
The simplest worst case is when the shortest road connecting each pair of cities doesn't pass through any other cities. It's easy to see that trying out all permutations would require N factorial steps, which is much worse than order N squared. It becomes impractical for even 20 cities. Better algorithms have been found, but the best is still no better than order 2 to the Nth power, O(2^N).
Well, we've now looked at the six time complexities that are most commonly taught when introducing algorithmic complexity.
Three of them are powers of N, and since powers of N are examples of polynomials in N, they are said to belong to the set of complexities that grow in polynomial time.
Two of these complexities involve the log of N. If you're wondering why we don't show the base of the log, that's because changing the base is just the same as changing the constant C. The base has no effect on the complexity expression.
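To see why the base is irrelevant: log_b(N) = log_2(N)/log_2(b), so switching bases just multiplies by the constant 1/log_2(b). A quick numeric check of our own:

```matlab
N = 1e6;
a = log10(N);            % 6, up to floating-point rounding
b = log2(N) / log2(10);  % the same value, rescaled by a constant factor
```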
The worst complexity of the six we considered, 2^N, is an example of a category that's said to grow in exponential time.

matrix multiplication

But we're going to squeeze in the complexity of one more problem because it's of fundamental importance to MATLAB, whose name means Matrix Laboratory. That problem is matrix multiplication. For the sake of simplicity, take an M-by-M matrix and multiply it by another M-by-M matrix. The result is going to be M-by-M as well.
To compute a single element of the output, we multiply a row of the first matrix by a column of the second, element by element: that's M multiplications, and we sum up the results with M additions. How many output elements do we have?
Well, M squared. Altogether, we need to make M times M squared, which equals M cubed multiplications, and the same number of additions. That sounds like an algorithm with M-cubed complexity. Indeed, it's customary to state the complexity of matrix multiplication in terms of the matrix dimension as being order M cubed. But to be fair to the algorithm, we should remember that the number of inputs N is itself equal to 2M squared. In terms of the number N of inputs, the number of operations is proportional to N^(3/2), and indeed the algorithm is order N^(3/2), O(N^(3/2)).
Here's an implementation called matmul. We'll confine our testing of matmul to square matrices, but it works on general matrix shapes. It starts by examining the two inputs to make sure they're compatible matrices, and then comes the computation work. The telltale sign as far as complexity is concerned, is that we have a set of three nested for loops, each of which runs from one to one of the dimensions of one of the matrices.
% function C = matmul(A,B)
% [rowA colA] = size(A);
% [rowB colB] = size(B);
% if ~ismatrix(A) || ~ismatrix(B)
% error('Function matmul works with matrices...');
% elseif colA ~= rowB
% error('Inner dimensions must agree!');
% end
%
% C = zeros(rowA, colB);
% for ii = 1:rowA
% for jj = 1:colB
% for kk = 1:colA
% C(ii,jj) = C(ii,jj) + A(ii,kk) * B(kk,jj);
% end
% end
% end
% end
To test it, I've written a function called TEST_MATMUL.
% function t = test_matmul(M,matrix_class)
% %TEST_MATMUL matmul run time for MxM matrices
% % TEST_MATMUL(M) M is a vector of matrix
% % dimensions. Run times are returned and
% % are plotted versus M-cubed along with
% % a fit line of the form: a*M^3 + b.
% %
% % TEST_MATMUL(...,MATRIX_CLASS) MATRIX_CLASS
% % is a string giving the class of matrices
% % constructed as input to matmul. Default
% % is double.
% %
% if nargin < 1, M = 100*(1:10); end
% if nargin < 2, matrix_class = "double"; end
% max_val = 99; % <= 2 digits for inspecting small matrices
% t = zeros(length(M),1);
% for ii = 1:length(M)
% A = randi(max_val,M(ii),matrix_class);
% B = randi(max_val,M(ii),matrix_class);
% t(ii) = timeit(@() matmul(A,B));
% fprintf('M = %d, t = %.4d\n',M(ii),t(ii));
% end
% % Fit data to M^3 dependence
% p = polyfit(M.^3,t,1); % straight-line fit
% t_fit = polyval(p,M.^3);
% % Plot time points and straight-line fit
% plot(M.^3,t,'b*',M.^3,t_fit,'--');
% grid on
% title_str = ...
% sprintf('MxM-matrix-multiplication run time vs M-cubed for %ss',matrix_class);
% title(title_str);
% xlabel('M^3');
% ylabel('time (s)');
% legend('data','fit','Location','SouthEast')
% end
test_matmul(100*(1:3))
M = 100, t = 8.2190e-03
M = 200, t = 4.6856e-02
M = 300, t = 1.8261e-01
ans = 3×1
0.0082
0.0469
0.1826
When you run it with just one argument, which is a list of matrix dimensions, it forms pairs of square matrices of those dimensions, feeds them sequentially to matmul, and times it on each pair. Finally, it plots the time versus M cubed, and to make it easy to look at the resulting plotted time points and decide whether the time is proportional to M cubed, the function also plots a dashed best-fit straight line. If the time is indeed proportional to the cube of the dimension, then the plotted runtimes will lie on that line, or at least very close to it. Let's call it with seven dimensions.
test_matmul(100*(1:7))
M = 100, t = 6.9329e-03
M = 200, t = 5.1772e-02
M = 300, t = 1.7585e-01
M = 400, t = 4.6864e-01
M = 500, t = 9.2230e-01
M = 600, t = 1.4735e+00
M = 700, t = 2.3579e+00
ans = 7×1
0.0069
0.0518
0.1759
0.4686
0.9223
1.4735
2.3579
here's our result. This is the time on the y-axis. This is M cubed on the x-axis. Friends, this looks pretty linear to me. Our experiment bears out our analysis of the algorithm. It's proportional to M cubed.
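We can also check the cubic growth numerically from the printed times (a back-of-the-envelope check of our own): going from M = 100 to M = 700 should multiply the time by about 7^3.

```matlab
predicted = (700/100)^3;       % 343
measured  = 2.3579 / 0.0069;   % roughly 342, using the times printed above
```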
What about MATLAB's algorithm? MATLAB itself uses an order M-cubed algorithm, but not the naive version we use in matmul. Like flip, which we talked about at the end of the previous lecture, it's not written in MATLAB. It's also optimized to use lots of features of today's modern microprocessors, such as carrying out more than one multiplication at a time. Get ready for the amazing performance of MATLAB's matrix multiplication.
A=randi(99,700);
B=randi(99,700);
timeit(@() A*B)
ans = 0.0220
About 0.02 seconds. Well, that's considerably faster than the 2.4 seconds we got with matmul for a problem of the same size.
Let's change test_matmul to time the built-in A*B instead.
% function t = test_matmul(M,matrix_class)
% %TEST_MATMUL matmul run time for MxM matrices
% % TEST_MATMUL(M) M is a vector of matrix
% % dimensions. Run times are returned and
% % are plotted versus M-cubed along with
% % a fit line of the form: a*M^3 + b.
% %
% % TEST_MATMUL(...,MATRIX_CLASS) MATRIX_CLASS
% % is a string giving the class of matrices
% % constructed as input to matmul. Default
% % is double.
% %
% if nargin < 1, M = 100*(1:10); end
% if nargin < 2, matrix_class = 'double'; end
% max_val = 99; % <= 2 digits for inspecting small matrices
% t = zeros(length(M),1);
% for ii = 1:length(M)
% A = randi(max_val,M(ii),matrix_class);
% B = randi(max_val,M(ii),matrix_class);
% t(ii) = timeit(@() A*B);
% fprintf('M = %d, t = %.4d\n',M(ii),t(ii));
% end
% % Fit data to M^3 dependence
%
% p = polyfit(M.^3,t',1); % straight-line fit
% t_fit = polyval(p,M.^3);
% % Plot time points and straight-line fit
% plot(M.^3,t,'b*',M.^3,t_fit,'--');
% grid on
% title_str = ...
% sprintf('MxM-matrix-multiplication run time vs M-cubed for %ss',matrix_class);
% title(title_str);
% xlabel('M^3');
% ylabel('time (s)');
% legend('data','fit','Location','SouthEast')
% end
test_matmul(100*(1:7))
M = 100, t = 5.1224e-03
M = 200, t = 4.6745e-02
M = 300, t = 1.6290e-01
M = 400, t = 4.1038e-01
M = 500, t = 8.3472e-01
M = 600, t = 1.5607e+00
M = 700, t = 2.3161e+00
ans = 7×1
0.0051
0.0467
0.1629
0.4104
0.8347
1.5607
2.3161
If I look at the plot, that looks pretty much like M-cubed. It's a little more uneven because the times are so short but that looks like M-cubed to me. Both the built-in MATLAB multiplication operator and our MATMUL show approximately order M-cubed behavior. But this is a good reminder that complexity analysis doesn't tell how fast one algorithm is relative to another on a given problem. It tells you only how execution time varies as the size of the problem grows for a given algorithm.

Assignment

Problem 1: Recursion revisited

Rewrite the recursive vector-reversal function in a more efficient way.
% function y=e_reversal(v)
% if length(v) <= 1
%     y = v;                               % base case: nothing to reverse
% else
%     k = v(1:ceil(length(v)/2));          % first half
%     b = v(ceil(length(v)/2)+1:end);      % second half
%     y = [e_reversal(b) e_reversal(k)];   % reverse each half and swap them
% end
% end
tic; A1 = e_reversal(1:1e6); toc
Elapsed time is 1.903745 seconds.
It's able to flip a large vector quickly.
tic; A1 = e_reversal(1:1e4); toc
Elapsed time is 0.015971 seconds.
tic; A2 = reversal(1:1e4); toc
Elapsed time is 0.698226 seconds.
The time taken is far less than with the initial recursive function.
Official solution:
% function v = reversal2(v)
% if length(v) > 1
% ii = round(length(v) / 2);
% v = [reversal2(v(ii+1:end)) reversal2(v(1:ii))];
% end
% end
tic; A1 = reversal2(1:1e6); toc
Elapsed time is 1.847476 seconds.
This is slightly more efficient than my solution.

Problem 2: Fibonacci profiler

Write a recursive Fibonacci function that also returns a trace of the calls it makes.
% function [f, trace] = my_fibo_trace(n, trace)
% trace = [trace,n];
% if n<=2
% f = 1;
% else
% [f1, trace] = my_fibo_trace(n-2, trace);
% [f2, trace] = my_fibo_trace(n-1, trace);
% f = f1+f2;
% end
% end
[f trace] = my_fibo_trace(6,[])
f = 8
trace = 1×15
6 4 2 3 1 2 5 3 1 2 4 2 3 1 2
histogram(trace)
Official solution:
% function [f, v] = fibo_trace(n,v)
% v(end+1) = n;
% if n == 1 || n == 2
% f = 1;
% else
% [f1, v] = fibo_trace(n-2,v);
% [f2, v] = fibo_trace(n-1,v);
% f = f1+f2;
% end
% end
Problem-3

Efficiency in Practice

Example
% function A = rooting_v1(v,w)
% A = zeros(length(v),length(w));
% for ii = 1:length(v)
% for jj = 1:length(w)
% A(ii,jj) = nthroot(v(ii),ii) * nthroot(w(jj),jj);
% end
% end
% end
Here's an example to illustrate the point. It's called rooting_v1 because it finds roots of numbers and because it's version 1 of our algorithm. The function takes two vectors of the same length and creates a matrix A as output. We've eliminated checks on the inputs to focus on the algorithm itself. The output matrix A has elements that are products of various roots of the elements of the two vectors.
Let's run it using two 1,000-element vectors. I'm going to time it with tic-toc instead of timeit, because at this stage all I need is a rough estimate of the runtime, and timeit can take up to 10 times longer than tic-toc.
v=randi(1e6,1,1e3);
w=randi(1e6,1,1e3);
tic; A1=rooting_v1(v,w); toc;
Elapsed time is 11.279655 seconds.
Eleven seconds. And since we know this is an order n-squared algorithm, we also know that if we needed to do this computation for inputs of length 10,000, we'd be waiting 100 times longer, which works out to over 18 minutes.
That's pretty slow. So here's the question we need to ask: is our algorithm doing any unnecessary work? Yes, it is. Consider the actual computation: we compute nthroot(v(ii),ii) inside the nested loop over jj. Instead of computing it once, storing it, and looking it up each time we need it, we compute the same thing over and over unnecessarily. In the example we timed, the lengths are 1,000, so that's a factor of a thousand wasted computations. Let's change the code to eliminate that inefficiency and, behold, rooting_v2.
Here, inside the outer loop but just before the inner loop starts, we pre-compute the nth root of v(ii) and store it in x. Then for each of the 1,000 times we need it in the inner loop, as we go through values of jj, we just look it up in x instead of recalculating it. Let's see if that helped.
% function A = rooting_v2(v,w)
% A = zeros(length(v),length(w));
% for ii = 1:length(v)
% x = nthroot(v(ii),ii);
% for jj = 1:length(w)
% A(ii,jj) = x * nthroot(w(jj),jj);
% end
% end
% end
tic; A2=rooting_v2(v,w); toc;
Elapsed time is 5.552899 seconds.
Sure it did and almost doubled our speed. But before we do anything else, let's do this.
isequal(A1,A2)
ans = logical
1
Whenever you make a change to the code to increase efficiency, it's a good idea to test the new version on exactly the same input that you used on the previous version and check that output to see that you get exactly the same result.
But back to this almost-doubling of our speed: it's not really all that surprising, because we've cut way down on the number of calls to nthroot, which is very computationally intensive, meaning it takes a lot of time. In our first version of rooting, we called nthroot two million times; in the second version, we called it one million one thousand times, so just about half as many. It's no wonder that the time was roughly cut in half. But is this the best we can do? Remember, the right question to ask is: are we doing any unnecessary work?

pre-allocation

We compute a root for each one of the 1,000 elements of w in the inner loop. But since the inner loop is repeated once for every iteration of the outer loop, we're repeating each of these computations 1,000 times. Why don't we calculate all the roots of w first, save them, and then just look them up when we need them in that inner loop, like this?
% function A = rooting_v3(v,w)
% A = zeros(length(v),length(w));
% rw = zeros(1,length(w));
% for jj = 1:length(w)
% rw(jj) = nthroot(w(jj),jj);
% end
% for ii = 1:length(v)
% x = nthroot(v(ii),ii);
% for jj = 1:length(w)
% A(ii,jj) = x * rw(jj);
% end
% end
% end
tic; A3=rooting_v3(v,w); toc;
Elapsed time is 0.044429 seconds.
isequal(A1,A3)
ans = logical
1
An improvement factor of over 100.
Now we call nthroot only 2,000 times, as opposed to over a million, so the time spent in nthroot doesn't dominate anymore. We're still doing 1 million multiplications in the nested loop structure, and that takes time; because of those multiplications in the double loop, this is still an order n-squared algorithm. But now that we've eliminated all the unnecessary work, we're seeing an overall speedup by a factor of over 200 relative to the original algorithm. Not bad for a few extra lines of code.
Let's check inputs that were infeasible before, the ones we estimated would take more than 18 minutes with the first algorithm.
v=randi(1e6,1,1e4);
w=randi(1e6,1,1e4);
tic; A4=rooting_v3(v,w); toc;
Elapsed time is 3.333773 seconds.
About 3 seconds, whereas we estimated the first algorithm would take over 18 minutes. Not bad.
Pre-computing and saving those nth root values to avoid recalculating them is actually a general technique. We pre-compute and store information so that we can save computation time by not having to carry out the same computation multiple times later on. One way to look at this is that we trade off memory against time. We use more memory, but we save time. In this case, the extra memory was a really small amount, one vector of length n. Well, we achieved a speedup of two orders of magnitude.
There's a line in this function that we've glossed over. A = zeros(length(v),length(w)) The first one. We know the size of A before the calculation and rooting begins because it must have exactly one row for every element in v and one column for every element in w. In all three versions of rooting, we start by setting A equal to a matrix of zeros of that size. This step is called pre-allocation.
It saves time because it forces MATLAB to allocate all the space needed for A before we start assigning values to it. We could just as well set it to ones instead of zeros; it makes no difference what values we put in there, because the nested loops assign a value to each element of A anyway. But it does make a difference in the amount of time required for those assignments. Without preallocation, every time the size of A increases by a row or a column as the nested loops run, execution has to halt while the MATLAB runtime system allocates space for the larger A and then copies all the values calculated so far into it. That adds up to a significant amount of time, which you can check by running this:
% function A = rooting_v3(v,w)
% %A = zeros(length(v),length(w));
% rw = zeros(1,length(w));
% for jj = 1:length(w)
% rw(jj) = nthroot(w(jj),jj);
% end
% for ii = 1:length(v)
% x = nthroot(v(ii),ii);
% for jj = 1:length(w)
% A(ii,jj) = x * rw(jj);
% end
% end
% end
tic; rooting_v3(v,w); toc;
Elapsed time is 3.455455 seconds.
In the lecture's run, without preallocation this call took 642 seconds, i.e., more than 10 minutes.
For large arrays, MATLAB must allocate a new block of memory and copy the old array contents to the new array as it makes each assignment. Programs that change a variable size in this way can spend most of their time running in this inefficient activity. That's exactly what happened when we commented out the preallocation of A and MATLAB is telling you exactly what I told you. Maybe you'll listen to me next time. We can save execution time by figuring out where needless work is being done and avoiding it.
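Here's a self-contained illustration of the effect (the sizes are our choice, small enough to run quickly): the same assignments with and without preallocation.

```matlab
n = 1e5;

tic
a = [];                      % no preallocation: a grows on every iteration,
for ii = 1:n                 % forcing repeated reallocation and copying
    a(ii) = ii;
end
t_growing = toc;

tic
b = zeros(1,n);              % preallocated once up front
for ii = 1:n
    b(ii) = ii;
end
t_prealloc = toc;
% t_prealloc is typically far smaller than t_growing
```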

Profiling

let's look at another approach to improving algorithms. As I mentioned, MATLAB provides a nifty profiler. Let's use it to profile our second version of rooting on vectors of length 1,000. First, we'll set up the vectors as we did before.
% v=randi(1e6,1,1e3);
% w=randi(1e6,1,1e3);
% profile off
% profile on
First we turn the profiler off to discard any profiling in progress, and then turn it on again. Then we run our function normally, without tic-toc or timeit.
% A2=rooting_v2(v,w);
Finally we run profile with the viewer option.
% profile viewer
Up pops the profiler window. It shows the function we just called and any functions it called, which in this case is just nthroot. For each one, it gives the number of calls, which is 1 for rooting_v2 and 1,001,000 for nthroot, and it gives the times in two forms: total time and self time.
We can click any function to see its details.

another example

Now let me show you another example where the question "is my algorithm doing unnecessary work?" can save the day. It's an example that lends itself to a speed-up without requiring additional memory. Instead, it takes advantage of the characteristics of typical input to avoid unnecessary calculations. Consider a social network where people are identified by unique IDs, which are integers from 1 to N. Let's say that N equals 3,000. People can follow other people, and this information is captured in a cell array called follows. Element j of follows is a vector containing the IDs of the people followed by the person with ID j.
I have a 3,000-element cell vector whose elements are vectors of integers, just like the one in this schematic. Its name is follows, too, and it's stored in a MAT-file named follows.
clear
load follows
whos
Name Size Bytes Class Attributes

follows 1x3000 416256 cell
Since there are 3,000 people, we have to look at 3000*2999/2, or about 4.5 million, pairs.
here's our MATLAB function.
It's called max_same_follows_v1; the v1 is for version 1. It visits every pair of people in the network and returns, in the output argument named people, the IDs of the pair whose lists have the largest number of common IDs. It also returns, as a second output argument named follows, a vector containing their list of common IDs. The function looks at each of the almost 4.5 million pairs by iterating in the outer loop, whose index is ii, from the first ID to the next-to-last ID, and then in the inner loop, whose index is jj, from ii+1 to the last ID.
% function [people, follows] = max_same_follows_v1(following)
%     people = [];
%     num_follows = 0;
%     for ii = 1:length(following)-1
%         for jj = ii+1:length(following)
%             tmp_follows = intersect(following{ii},following{jj});
%             n = length(tmp_follows);
%             if n > num_follows
%                 num_follows = n;
%                 people = [ii jj];
%                 follows = tmp_follows;
%             end
%         end
%     end
% end
tic; [p1, f1]=max_same_follows_v1(follows); toc
Elapsed time is 232.428943 seconds.
Our function is surprisingly slow, but then again, it's an order n-squared algorithm. In fact, it's order n-cubed in the worst case, because the intersect function is itself linear, and a follows list could be close to n long if everybody follows everybody else. Don't forget that we need to look at over four million pairs, and that takes time. Let's look at the output.
p1
p1 = 1×2
293 1096
f1
f1 = 8×1
71
150
334
672
922
1066
1204
1366
Person 293 and person 1,096 had these elements in common. It looks like there are eight of them.
It takes a long time, but are we doing any unnecessary work?
Here's what might happen in practice and probably will happen many times. Let's say we're at a place during the execution of this algorithm where our current candidate pair follow, say, five common people. From that point on, we shouldn't even consider any person who doesn't follow more than five people, because there's no way that this person can be part of a winning pair. So we could save time and skip this person.
Well, here's how we can change the code to take advantage of this insight.
% function [people, follows] = max_same_follows_v2(following)
%     people = [];
%     num_follows = 0;
%     for ii = 1:length(following)-1
%         if length(following{ii}) <= num_follows   % skip if list is too short
%             continue;
%         end
%         for jj = ii+1:length(following)
%             if length(following{jj}) <= num_follows   % skip if list is too short
%                 continue;
%             end
%             tmp_follows = intersect(following{ii},following{jj});
%             n = length(tmp_follows);
%             if n > num_follows
%                 num_follows = n;
%                 people = [ii jj];
%                 follows = tmp_follows;
%             end
%         end
%     end
% end
We've added an if statement in each of the loops to skip these losers. If you need to, feel free to stop the video to convince yourself that they should be skipped. In any case, whenever the if statement identifies somebody who cannot possibly be part of a winning pair, it terminates the current iteration by means of the continue statement. The continue statement is very similar to the break statement in that they both cause the current iteration to end without further computation.
But unlike the break statement, which quits the loop entirely, the continue statement quits just the current iteration of the loop and continues to the next iteration. If it's in a nested loop, it quits only the loop it's in, not the surrounding loop. It jumps to the end of the body of the loop, and unless it was the last iteration, the loop continues with the next iteration from the top of the body of the loop. Let's see whether it helped.
tic; [p2, f2]=max_same_follows_v2(follows); toc
Elapsed time is 3.266027 seconds.
It helped a lot. That's over a factor of 60 improvement,
isequal(p1,p2) && isequal(f1, f2)
ans = logical
1
and we got the right answers.
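Since continue and break are easy to confuse, here's a tiny sketch, not from the lecture, contrasting the two:

```matlab
for k = 1:5
    if k == 3
        continue        % skip only iteration k == 3
    end
    fprintf('%d ', k);  % prints: 1 2 4 5
end
fprintf('\n');

for k = 1:5
    if k == 3
        break           % quit the whole loop at k == 3
    end
    fprintf('%d ', k);  % prints: 1 2
end
fprintf('\n');
```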
Of course, we got a bit lucky with this input. For a different network, the if conditions might have been false in every case, and we would not have saved any time at all. In other words, the worst-case execution times for both versions of the algorithm are the same.
Nevertheless, this technique can be very useful in practice. The general idea is that if and when you realize that there's no need to continue the current iteration, you should skip the unnecessary work. It can save time.
The general idea of this lecture is that, to increase the efficiency of your algorithm in practice, you should ask yourself whether it's doing unnecessary work.

Practice Quiz

1.Question What question should you ask yourself when your program runs slow and the algorithm seems to be the best you can come up with?
2.Question What does profiling your code mean?
3.Question Method for reducing unnecessary work:
4.Question Which of the following statements is true?
Correct
True: 1. One algorithm may have long running complicated steps, while another may have simple fast steps. 2. Time complexity tells us how one algorithm scales with the input size. It is not useful to compare the actual run times of two algorithms for certain input. 3. Also, worst case execution time is different from execution time on one particular input.

Vectorization and Other Speed-Ups

Any operator or function that operates on an entire vector or entire array is said to be a vector command.
The translation of code from a version that uses explicit looping to one that uses vector commands is called vectorization.
For example, recall our rooting example: its third version was about 200 times faster than the first.
% function A = rooting_v3(v,w)
%     A = zeros(length(v),length(w));
%     rw = zeros(1,length(w));
%     for jj = 1:length(w)
%         rw(jj) = nthroot(w(jj),jj);
%     end
%     for ii = 1:length(v)
%         x = nthroot(v(ii),ii);
%         for jj = 1:length(w)
%             A(ii,jj) = x * rw(jj);
%         end
%     end
% end
v=randi(1e6,1,1e4);
w=randi(1e6,1,1e4);
timeit(@() rooting_v3(v,w))
ans = 3.4456
There's another approach that's even more sophisticated. So far, we've used nthroot to calculate the root of a scalar first argument specified by a scalar second argument. But if you check the documentation for nthroot, you'll see that, like most numerical functions, it also allows vector input. You can give it two vectors of the same length, and it'll work through them elementwise, finding the root of each element of the first argument specified by the corresponding element of the second argument. So if we give the entire vector w as the first argument and the vector of indices of w as the second argument, nthroot will return all the roots we need for w in one function call. We can do the same thing to get all the roots we need for v in one function call, like this.
% function A = rooting_v4(v,w)
%     A = zeros(length(v),length(w));
%     rv = nthroot(v,1:length(v));
%     rw = nthroot(w,1:length(w));
%     for ii = 1:length(v)
%         for jj = 1:length(w)
%             A(ii,jj) = rv(ii) * rw(jj);
%         end
%     end
% end
v=randi(1e6,1,1e4);
w=randi(1e6,1,1e4);
timeit(@() rooting_v4(v,w))
ans = 3.3355
Well, that didn't help a lot. You know why? Because we're not working on the part of the code that's taking most of the time. The number of operations in those two nthroot calls is proportional to the length n of the inputs; the number of operations in the nested loop is proportional to n squared, and that's where most of the time is spent. Let's see if the profiler backs me up on that. I'll run it on version 3.
% profile off;
% profile on;
% rooting_v3(v,w);
% profile viewer
The profiler shows that all the nthroot calculations together took only about 2.8 percent of the time (2 percent plus 0.8 percent). Friends, we're barking up the wrong tree. Maybe you noticed that what's happening in the nested loop could be done with matrix multiplication instead. If you did, then go to the head of the class.
% function A = rooting_v5(v,w)
%     rv = nthroot(v,1:length(v));
%     rw = nthroot(w,1:length(w));
%     A = rv' * rw;
% end
clear;
v=randi(1e6,1,1e4);
w=randi(1e6,1,1e4);
timeit(@() rooting_v5(v,w))
ans = 0.2435
This thing lapped v4: by these timings, it's more than 13 times faster. And it's not only fast, it's beautiful. We vectorized the calculation of the roots of v, we vectorized the calculation of the roots of w, and most importantly, we vectorized the product of the roots to get A.
We're doing all the work and all the calculations we were doing before, with exactly the same set of arithmetic operations, and the order of our algorithm is still n squared. The only difference is that we're doing it all faster. Well, we're actually not doing anything; the computer is doing it. The speedup comes from the fact that MATLAB's operators and built-in functions are highly optimized, use specific hardware features, and are just plain blisteringly fast. The evolution of our simple rooting example from version 1 all the way to version 5 is an example of the art of programming.
Our first solution, which was a perfectly reasonable attempt, would take over 12 minutes to do the same calculation that our fifth version did in about a fifth of a second. With some thought and a little work, we improved the performance by a factor of 4,000, even though the time complexity of all five versions of our function was exactly the same. Force yourself at every opportunity to use built-in operators and built-in functions whenever you can. Your code will run faster and, as a bonus, will be simpler, easier to read, easier to debug, and less prone to error than if you stick to explicit loops.
Some tricks take advantage of logical indexing and we provide a thorough introduction to logical indexing both in our textbook and in our first course. We haven't done any time comparisons or profiling of it for you. Let's do that now. I've got a surprise in store for you. Here's a task that can be solved in one line with logical indexing. The task is to take an existing matrix and set to zero every element of it that's smaller than some limit. Let's start with a nested loop solution.
A=[1 2 2 5;2 3 5 5;7 5 5 2]
A = 3×4
1 2 2 5
2 3 5 5
7 5 5 2
size(A)
ans = 1×2
3 4
size(A,1)
ans = 3
size(A,2)
ans = 4
% function A = small2zero_v1(A,limit)
%     for ii = 1:size(A,1)
%         for jj = 1:size(A,2)
%             if A(ii,jj) < limit
%                 A(ii,jj) = 0;
%             end
%         end
%     end
% end
A=randi(1e6,1e4);
timeit(@() small2zero_v1(A,50))
ans = 3.1142
Not too shabby for 100 million elements.
Now, let's time a solution that uses logical indexing.
% function A = small2zero_v2(A,limit)
%     A(A<limit) = 0;
% end
timeit(@() small2zero_v2(A,50))
ans = 0.7791
About four times as fast; that's pretty good.
tic; A(A<50)=0;toc
Elapsed time is 0.434215 seconds.
Getting rid of the function-call overhead cut the time almost in half again. That's something to consider whenever you're calling a fast function, but calling it so many times in a loop that the overall speed of the loop is too slow. If that fast function isn't too complicated, then instead of calling it, you can paste its body directly into the body of the loop, being careful to use unique variable names, and you'll get a speedup with no change in the algorithm.
The version with implicit looping is almost seven times faster than the version with explicit looping. That's a big speed up,
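Here's a tiny sketch, not from the lecture, of that inlining idea; the helper name twice is hypothetical:

```matlab
% Hypothetical helper, assumed saved as twice.m on the path:
% function y = twice(x)
%     y = 2*x;
% end

% Before: the helper is called a million times, paying call overhead each time.
total = 0;
for k = 1:1e6
    total = total + twice(k);
end

% After: the helper's one-line body is pasted into the loop.
total2 = 0;
for k = 1:1e6
    total2 = total2 + 2*k;   % same arithmetic, no call overhead
end
```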
Let's profile this first version of the function,
% profile off;
% profile on
% small2zero_v1(A,50);
% profile viewer
Why did this function slow down when we profiled it? There is some overhead to profiling, but that can't explain a factor-of-ten slowdown. Well, here's the thing about the profiler, and here comes the surprise: it turns off MATLAB's code optimizer.
What's that? You didn't know that MATLAB optimizes your code? Well, it does and that's the surprise. First off, the code that MATLAB runs is not the code that you write. That's never true because when you tell MATLAB to run a function, it translates the code into something that can be executed by your computer's processor. MATLAB looks at the whole file and translates it into a form that takes advantage of the processor hardware and that translated code is what actually runs. This translation step is called just-in-time compiling, and it includes optimizations.
One last thing about just-in-time compiling: when the compiler translates the code for a function, it saves the translation internally so that, until you edit the function or restart MATLAB, it doesn't have to translate it again. For this reason, the first timing of a function's execution after editing it, or after restarting MATLAB, tends to be noticeably longer than subsequent timings because of the just-in-time translation step. If you clear the function, by giving the clear command with the name of the function or giving the command clear all, MATLAB deletes the translation. Unless you need to clear it, say to reset a persistent variable, it's best not to.
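For example, using one of our own functions from earlier:

```matlab
clear rooting_v3   % deletes rooting_v3's cached translation (and any persistent variables)
clear all          % clears the workspace and every cached translation
```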
let's look at another problem. Set every element of the array A to zero that's smaller than say, the second element in its own row.
For that problem, it might seem that a nested loop solution is the only option. Say like this.
% function A = row2explicit(A)
%     for ii = 1:size(A,1)
%         for jj = 1:size(A,2)
%             if A(ii,jj) < A(ii,2)
%                 A(ii,jj) = 0;
%             end
%         end
%     end
% end

Repmat

Well, it's time to trot out the built-in function named repmat. Repmat stands for "replicate matrix", and repmat makes its living by making copies of vectors and arrays. It's helpful in situations like this one, where it's necessary to compare every element of a matrix to the elements of one column or one row of that matrix (or of another matrix), or to some rectangular subset of a matrix. Let's get a random matrix to work with and then call repmat.
A=randi(99,4,5)
A = 4×5
15 32 92 3 47
28 29 51 57 3
87 44 63 5 7
60 90 72 42 92
Tile_column_2=repmat(A(:,2),1,size(A,2))
Tile_column_2 = 4×5
32 32 32 32 32
29 29 29 29 29
44 44 44 44 44
90 90 90 90 90
A(A<Tile_column_2)=0
A = 4×5
0 32 92 0 47
0 29 51 57 0
87 44 63 0 0
0 90 0 0 92
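As an aside not covered in the lecture: since release R2016b, MATLAB's implicit expansion lets relational operators broadcast a column against a whole matrix, so the same result can be had without repmat:

```matlab
A = [15 32 92 3 47; 28 29 51 57 3; 87 44 63 5 7; 60 90 72 42 92];
A(A < A(:,2)) = 0   % A(:,2) is expanded across the columns automatically
```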
Let's cook up a command that will locate the indices of the odd elements in a vector of integers. Our recipe calls for three ingredients: the function find, the function mod, and a logical array. First, let's refresh our memories about how mod works.
mod(7,2)
ans = 1
mod(13,4)
ans = 1
mod(48,7)
ans = 6
help rng
rng Control the random number generator used by RAND, RANDI, and RANDN.
rng(SD) seeds the random number generator using the non-negative
integer SD so that RAND, RANDI, and RANDN produce a predictable
sequence of numbers.

rng('shuffle') seeds the random number generator based on the current
time so that RAND, RANDI, and RANDN produce a different sequence of
numbers after each time you call rng.

rng(SD,GENERATOR) and rng('shuffle',GENERATOR) additionally specify the
type of the random number generator used by RAND, RANDI, and RANDN.
GENERATOR is one of:

Generator Description
------------------------------------------------------------------
'twister' Mersenne Twister
'simdTwister' SIMD-oriented Fast Mersenne Twister
'combRecursive' Combined Multiple Recursive
'multFibonacci' Multiplicative Lagged Fibonacci
'v5uniform' Legacy MATLAB 5.0 uniform generator
'v5normal' Legacy MATLAB 5.0 normal generator
'v4' Legacy MATLAB 4.0 generator

rng('default') puts the settings of the random number generator used by
RAND, RANDI, and RANDN to their default values so that they produce the
same random numbers as if you restarted MATLAB. In this release, the
default settings are the Mersenne Twister with seed 0.

SCURR = rng returns the current settings of the random number generator
used by RAND, RANDI, and RANDN. The settings are returned in a
structure SCURR with fields 'Type', 'Seed', and 'State'.

rng(S) restores the settings of the random number generator used by
RAND, RANDI, and RANDN back to the values captured previously by
S = rng.

SPREV = rng(...) returns the previous settings of the random number
generator used by RAND, RANDI, and RANDN before changing the settings.

Example 1: Retrieve and Restore Generator Settings
s = rng % get the current generator settings
x = rand(1,5) % RAND generates some values
rng(s) % restore the generator settings
y = rand(1,5) % generate the same values so x and y are equal

Example 2: Restore Settings for Legacy Generator
prevS = rng(0,'v5uniform') % use legacy generator, save previous settings
x = rand % legacy startup value .9501
rng(prevS) % restore the previous settings

See Replace Discouraged Syntaxes of rand and randn to use rng to replace
RAND or RANDN with the 'seed', 'state', or 'twister' inputs.

See also rand, randi, randn, RandStream, now.

Reference page for rng
rng(0);
v=randi(99,1,10)
v = 1×10
81 90 13 91 63 10 28 55 95 96
ones_zeros=mod(v,2)
ones_zeros = 1×10
1 0 1 1 1 0 0 1 1 0
true_false=ones_zeros==1
true_false = 1×10 logical array
1 0 1 1 1 0 0 1 1 0
idx_odd=find(true_false)
idx_odd = 1×6
1 3 4 5 8 9
Finally, we search the logical array for non-zero values, and we find them at indices one, three, four, five, eight, and nine.
help find
find Find indices of nonzero elements.
I = find(X) returns the linear indices corresponding to
the nonzero entries of the array X. X may be a logical expression.
Use IND2SUB(SIZE(X),I) to calculate multiple subscripts from
the linear indices I.

I = find(X,K) returns at most the first K indices corresponding to
the nonzero entries of the array X. K must be a positive integer,
but can be of any numeric type.

I = find(X,K,'first') is the same as I = find(X,K).

I = find(X,K,'last') returns at most the last K indices corresponding
to the nonzero entries of the array X.

[I,J] = find(X,...) returns the row and column indices instead of
linear indices into X. This syntax is especially useful when working
with sparse matrices. If X is an N-dimensional array where N > 2, then
J is a linear index over the N-1 trailing dimensions of X.

[I,J,V] = find(X,...) also returns a vector V containing the values
that correspond to the row and column indices I and J.

Example:
A = magic(3)
find(A > 5)

finds the linear indices of the 4 entries of the matrix A that are
greater than 5.

[rows,cols,vals] = find(speye(5))

finds the row and column indices and nonzero values of the 5-by-5
sparse identity matrix.

See also sparse, ind2sub, relop, nonzeros.

Reference page for find
Other functions named find
Putting all three ingredients together, we can do the whole job in one line:
find(mod(v,2)==1)
ans = 1×6
1 3 4 5 8 9
find can give you row and column indices for two-dimensional arrays too, let's form a three by four array of integers.
rng(0);
A=randi(99,3,4)
A = 3×4
81 91 28 96
90 63 55 16
13 10 95 97
[row,col]=find(mod(A,2)==1)
row = 7×1
1
3
1
2
2
3
3
col = 7×1
1
1
2
2
3
3
4
[row,col]
ans = 7×2
1 1
3 1
1 2
2 2
2 3
3 3
3 4
Concatenating row and col side by side gives an array of row-column pairs: row, column; row, column; and so on. It tells us that in the array A, element (1,1) holds an odd number, element (3,1) an odd number, element (1,2) an odd number, and so on.
If we wanted to find the evens instead, we would use the same logical expression with the one changed to a zero.
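For instance, with the same vector v as above:

```matlab
find(mod(v,2)==0)   % ans = 2 6 7 10: the indices of the even elements
```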
Beyond vectorization, we've got a few other tricks up our sleeves that can help speed up your algorithm. Now begins the "other speed-ups" section of this lecture.
First stop: recursion. As in, stop using it. Recursion is an elegant way to solve many problems, but it can come with a performance penalty. As we illustrated using the factorial problem, most of the time an iterative solution will be faster.

Modes of Passing Arguments

When you pass an argument to a MATLAB function, no copy is made at first; the argument is passed by reference, a scheme sometimes called copy-on-write or lazy copy. MATLAB then watches that function like a hawk throughout its execution, and if it ever starts to change so much as one bit of that argument, even if it's just one element of a 100-million-element array, MATLAB pauses execution, copies the entire array onto the stack right then and there, and then continues execution. At that point, call-by-reference is abandoned for that variable, and MATLAB switches to call-by-value, which is a fancy way of saying that the argument is copied onto the stack. Therefore, unless it's absolutely necessary, you should avoid changing elements of potentially large input arguments inside any function.
Let's see an example.
% function mx = input_mod_test(A)
%     mx = max(A(:));
% end
rng(0);
x=randi(1e6,2e4);
timeit(@() input_mod_test(x))
It takes 0.26 seconds. Now, let's make a trivial modification. We change one element, and run the code again.
% function mx = input_mod_test(A)
%     mx = max(A(:));
%     A(1) = 0;
% end
timeit(@() input_mod_test(x))
It took over 10 times longer. All that extra time was spent making a copy of the array, instead of passing it by reference, because of this one change in the called function. So keep this in mind when you work with large arrays and pass them to functions.
The next item on our list of other speed-ups is index re-ordering.

Index Re-ordering

There are good reasons to change the order in which nested loops index an array. One of them arises when pre-allocation is not possible; in that case, index re-ordering can sometimes avert total disaster. Take a look at this example.
% function A = not_preallocatable_v1(N)
% % from COMPUTER PROGRAMMING WITH MATLAB, 3rd Edition, 2015
% % by J. M. Fitzpatrick and A. Ledeczi
% % Chapter 2, Section 4.9
%     ii = 0;
%     while rand > 1/N
%         ii = ii + 1;
%         for jj = 1:N
%             A(ii,jj) = ii + jj^2;
%         end
%     end
% end
This function produces a matrix whose column size is determined by its input argument, but whose row size depends on the numbers that happen to come out of a random number generator, so its size can't be known in advance. The use of the random number generator to make it unknown is done here only for the sake of illustrating the issue with a simple example; in general, the size might be unknown for some other reason. But in any case, as a result of its ultimate size being unknown, pre-allocation is not feasible, so A must be reallocated every time the outer loop iterates and a new row is needed.
Let's time it.
rng(0);
tic;
A1=not_preallocatable_v1(3000);
toc
Elapsed time is 27.312595 seconds.
That's definitely slow. What to do? Well, one solution is to allocate a matrix larger than the expected required size, and then trim it down to the actual size afterward (or let it grow in large chunks inside the loop). But here we want to introduce the idea of re-ordering the indexing, and that can solve the problem too.
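Here's what that over-allocate-and-trim alternative might look like; this is a sketch, and the function name and growth factor are our own choices, not from the lecture:

```matlab
function A = not_preallocatable_overalloc(N)
    A = zeros(2*N, N);               % guess a generous row count up front
    ii = 0;
    while rand > 1/N
        ii = ii + 1;
        if ii > size(A,1)
            A(2*size(A,1), N) = 0;   % out of room: double the row capacity
        end
        for jj = 1:N
            A(ii,jj) = ii + jj^2;
        end
    end
    A = A(1:ii, :);                  % trim down to the rows actually filled
end
```

With that alternative noted, back to the index re-ordering idea.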
Let's look at an implementation that does that.
% function A = not_preallocatable_v2(N)
% % from COMPUTER PROGRAMMING WITH MATLAB, 3rd Edition, 2015
% % by J. M. Fitzpatrick and A. Ledeczi
% % Chapter 2, Section 4.9
%     ii = 0;
%     while rand > 1/N
%         ii = ii + 1;
%         for jj = 1:N
%             A(jj,ii) = ii + jj^2;
%         end
%     end
%     A = A';
% end
Notice that the inner loop now increments the row index instead of the column index, so it fills a column from top to bottom. When the inner loop is done with its column, the outer loop increments the column index, which shifts us one column to the right, and then the inner loop starts adding elements to that new column. To summarize: each iteration of the inner loop adds an element below the previous element, and each iteration of the outer loop adds a new column to the right of the previous one. As a result, a new column is appended for each iteration of the outer loop instead of a new row, and because MATLAB stores arrays in column-major order, appending a column roughly amounts to extending the memory at the end, while appending a row forces the whole array to be re-laid-out. Of course, these loops end up calculating the transpose of the matrix we want, so A must be transposed after the looping is done. That transposition requires time too, but it's a single operation implemented with low-level processing that's so fast we can neglect it. All we're doing here is assigning values to the elements of A in a different order, but what a difference the order makes.
rng(0);
tic;
A2=not_preallocatable_v2(3000);
toc
Elapsed time is 0.380766 seconds.
That's over 70 times faster.
isequal(A1,A2)
Incrementing the row index in the inner loop gives column-major order, incrementing the column index in the inner loop gives row-major order, and column-major order is faster.
Here's an illustrative function that lets you try either order, with or without preallocation, when assigning values to a matrix. The first two arguments are the row and column lengths, and the third and fourth arguments determine whether column-major order is used and whether preallocation is used. The body of the function provides the column-major versus row-major option by means of a choice between two different nested loops, with the row-major version in second place.
% function A = stride_right(M,N,col_major,preallocate)
%     if preallocate, A = zeros(M,N); end
%     if col_major
%         for ii = 1:N
%             for jj = 1:M
%                 A(jj,ii) = 11*jj + 123*ii;
%             end
%         end
%     else % row major
%         for ii = 1:M
%             for jj = 1:N
%                 A(ii,jj) = 11*ii + 123*jj;
%             end
%         end
%     end
% end
This is the point of showing you this code: you can see that row-major order is the more natural way to program nested loops, because it's more natural to write the indices that specify an element of the array, ii,jj, in the same order as they appear in the loop-control statements, ii then jj. On the other hand, to go for the gold with the column-major approach, you have to put the indices that specify the element of the array in the order jj,ii, opposite to the order ii,jj in the loop-control statements, and you have to swap the limits M and N too. But that's all it takes to achieve stride-one access to A, and that's all it takes to win the gold.
Let's test my claim that column-major order is faster even when preallocation is not an issue. We'll run this example both ways and time it on a 10,000-by-20,000 array.
timeit(@() stride_right(1e4,2e4,false,true))
ans = 7.8603
timeit(@() stride_right(1e4,2e4,true,true))
ans = 1.1080
So column-major is about seven times faster.
So why is the more natural way, the less efficient way? Well, it's the result of an unfortunate language design decision made over 60 years ago that we're still living with today. It happened on that fateful day in 1954 when the designers of Fortran, the first high level language for numerical computing, decided to map the order of array elements in the memory onto column major order in the language.
So the stride one index is the first one instead of the last one. It's unfortunate because as our example shows, when programmers write nested loops, they are naturally prone to iterate over the first index and the first loop, the second index and the second loop and so on. As a result, the stride in the innermost loop, instead of being one, is as large as possible.
MATLAB followed the Fortran lead probably because programmers moving from Fortran to MATLAB expected it to be that way.
Other languages like C and C++ use row major order. But programmers in the know, know the order of their language and take advantage of it.
There's one more kind of loop in MATLAB, and we're going to finish with that. It's called parfor, spelled P-A-R-F-O-R, and it stands for parallel for-loop.

parfor

it's available only if you have the parallel computing toolbox installed. What parfor does is run iterations of its body in parallel, that is, simultaneously on separate CPU cores if, that is, your computer has multiple CPU cores.
There is one restriction though. The body of the loop cannot depend on earlier iterations of the same loop. That is, parfor works only if the loop body can be executed in any order with no effect on the outcome.
There's an issue of practicality, which is that parfor makes sense only if the body of the loop is slow enough to make up for the extra time required by parfor, for the overhead of dividing up the work across multiple cores, sending them the data they need to work on, and then collecting the results. So a parfor version of an iteration can be slower than a regular for loop. But in scientific computing, the body of a for loop is often very slow because it does a large number of complicated computations.
To show how it works, let's try an example similar to what we're showing here in our schematic, which is a modification of a MATLAB example in the documentation for parfor. We'll start with a for loop version first, and then switch to parfor with the same loop body. Our for loop version is called eigen_for, and it computes the eigenvalues of a set of matrices stored in a three-dimensional array named A3D.
% function a = eigen_for(A3D)
%     a = zeros(1,size(A3D,1));
%     for ii = 1:length(a)
%         a(ii) = max(abs(eig(squeeze(A3D(ii,:,:)))));
%     end
% end

% function a = eigen_parfor(A3D)
%     a = zeros(1,size(A3D,1));
%     parfor ii = 1:length(a)
%         a(ii) = max(abs(eig(squeeze(A3D(ii,:,:)))));
%     end
% end
The only difference between this function, which is named eigen_parfor, and eigen_for is that the keyword for has been replaced by the keyword parfor. I'm going to time both of them on the same array. Since they're a bit slow and the exact timing is not that important, I'll use tic and toc again.
We're going to calculate the eigenvalues for 50,000 matrices of size 50 by 50. Let's start with the for loop version.
A3D=rand(5e4,50,50);
tic;
af=eigen_for(A3D);
toc
Elapsed time is 42.335058 seconds.
%help rand
%help randi
rand(5,3,2)
ans =
ans(:,:,1) =

0.3814 0.0028 0.9169
0.0806 0.6451 0.8990
0.1981 0.5741 0.6586
0.7718 0.2447 0.3374
0.8205 0.2228 0.3701


ans(:,:,2) =

0.7627 0.9250 0.1133
0.7511 0.3270 0.8270
0.3729 0.0876 0.6378
0.9341 0.1553 0.8012
0.1300 0.7916 0.1705
randi(5,3,2)
ans = 3×2
3 2
2 1
1 4
Now, before we run parfor, I want to do something first, and that's to command MATLAB to prepare CPU cores for parfor to run on. We do that by clicking on these little gray bars down here. If you don't see them in your version, then that means that your system doesn't have the parallel computing toolbox installed.
tic;
ap=eigen_parfor(A3D);
toc
Elapsed time is 29.862379 seconds.
With four cores, each core performs about one-fourth of the iterations, and they're all working at the same time, so you might expect a factor-of-four speedup. Theoretically four is the maximum, of course, but because of the overhead we discussed a few minutes ago, something closer to a factor of three is the best we can reasonably hope for in most cases, and as the timings above show, the actual gain can be smaller still.
isequal(af, ap)
ans = logical
1
We recommend that you read the documentation, because the details there can help you improve your results. For example, you might get an even greater speedup with the multiple-threads option, which allows the workers to share memory and so reduces communication time. You set that up with the parpool function. But the threads option doesn't work for all loop bodies. Like I said, you need to read the documentation to get the most out of parallel processing.
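A minimal sketch of that option (assuming the Parallel Computing Toolbox and a release that supports thread-based pools; eigen_parfor and A3D are from above):

```matlab
pool = parpool('Threads');   % thread-based workers share memory with the client
tic;
ap = eigen_parfor(A3D);      % same parfor function as before
toc
delete(pool);                % shut the pool down when finished
```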

Summary

Practice Quiz

% Pramod Kumar Yadav
%[linkedin,insta,fb,github]@iampramodyadav
1.Question Which of the following statements is false?
2.Question An input argument to a MATLAB function
3.Question What can we do to make sure our program is efficient when preallocation of a matrix is not possible?
4.Question Which of the following is false?

Object Oriented Programming

Graphical User Interfaces

[Separate Live script.]